Test Report: KVM_Linux_crio 19652

3ce3ac850d7f30e0226899d99df12771c4497062:2024-09-16:36238

Test fail (10/213)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-682228 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-682228 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.954908504s)

-- stdout --
	* [addons-682228] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-682228" primary control-plane node in "addons-682228" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image docker.io/busybox:stable
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-682228 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	* Verifying ingress addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-682228 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
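The gcp-auth messages in the stdout above describe two things: keeping credentials out of a pod via the `gcp-auth-skip-secret` label in its pod configuration, and re-running addons enable with --refresh so existing pods pick up credentials. A minimal sketch of both follows; only the label key, the profile name, and the --refresh hint come from the output above, while the pod name `my-pod`, the busybox image, and the label value "true" are assumptions for illustration, not part of this test run:

	# Create a pod whose configuration carries the skip label, so the gcp-auth webhook leaves it alone
	kubectl run my-pod --image=docker.io/busybox:stable --labels="gcp-auth-skip-secret=true" -- sleep 3600

	# Re-run addons enable with --refresh so already-running pods get credentials mounted, as the output suggests
	out/minikube-linux-amd64 -p addons-682228 addons enable gcp-auth --refresh
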
** stderr ** 
	I0916 12:52:36.715791  721192 out.go:345] Setting OutFile to fd 1 ...
	I0916 12:52:36.715914  721192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:52:36.715923  721192 out.go:358] Setting ErrFile to fd 2...
	I0916 12:52:36.715928  721192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:52:36.716106  721192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 12:52:36.716736  721192 out.go:352] Setting JSON to false
	I0916 12:52:36.717631  721192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9306,"bootTime":1726481851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 12:52:36.717743  721192 start.go:139] virtualization: kvm guest
	I0916 12:52:36.719648  721192 out.go:177] * [addons-682228] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 12:52:36.720714  721192 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 12:52:36.720716  721192 notify.go:220] Checking for updates...
	I0916 12:52:36.722536  721192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 12:52:36.723600  721192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 12:52:36.724659  721192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 12:52:36.725697  721192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 12:52:36.726719  721192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 12:52:36.727880  721192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 12:52:36.758893  721192 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 12:52:36.759887  721192 start.go:297] selected driver: kvm2
	I0916 12:52:36.759899  721192 start.go:901] validating driver "kvm2" against <nil>
	I0916 12:52:36.759910  721192 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 12:52:36.760556  721192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:52:36.760626  721192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 12:52:36.775391  721192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 12:52:36.775444  721192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 12:52:36.775683  721192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:52:36.775712  721192 cni.go:84] Creating CNI manager for ""
	I0916 12:52:36.775752  721192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 12:52:36.775760  721192 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 12:52:36.775807  721192 start.go:340] cluster config:
	{Name:addons-682228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-682228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:52:36.775902  721192 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:52:36.777998  721192 out.go:177] * Starting "addons-682228" primary control-plane node in "addons-682228" cluster
	I0916 12:52:36.778786  721192 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:52:36.778816  721192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 12:52:36.778826  721192 cache.go:56] Caching tarball of preloaded images
	I0916 12:52:36.778931  721192 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 12:52:36.778945  721192 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 12:52:36.779245  721192 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/config.json ...
	I0916 12:52:36.779273  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/config.json: {Name:mk26ecdc8f840a7a292e8d789708488020bf118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:52:36.779422  721192 start.go:360] acquireMachinesLock for addons-682228: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 12:52:36.779489  721192 start.go:364] duration metric: took 40.621µs to acquireMachinesLock for "addons-682228"
	I0916 12:52:36.779512  721192 start.go:93] Provisioning new machine with config: &{Name:addons-682228 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-682228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:52:36.779572  721192 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 12:52:36.780863  721192 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 12:52:36.780988  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:52:36.781028  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:52:36.794967  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0916 12:52:36.795382  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:52:36.795912  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:52:36.795936  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:52:36.796256  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:52:36.796453  721192 main.go:141] libmachine: (addons-682228) Calling .GetMachineName
	I0916 12:52:36.796605  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:36.796815  721192 start.go:159] libmachine.API.Create for "addons-682228" (driver="kvm2")
	I0916 12:52:36.796867  721192 client.go:168] LocalClient.Create starting
	I0916 12:52:36.796898  721192 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 12:52:36.911628  721192 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 12:52:37.178155  721192 main.go:141] libmachine: Running pre-create checks...
	I0916 12:52:37.178183  721192 main.go:141] libmachine: (addons-682228) Calling .PreCreateCheck
	I0916 12:52:37.178708  721192 main.go:141] libmachine: (addons-682228) Calling .GetConfigRaw
	I0916 12:52:37.179123  721192 main.go:141] libmachine: Creating machine...
	I0916 12:52:37.179138  721192 main.go:141] libmachine: (addons-682228) Calling .Create
	I0916 12:52:37.179248  721192 main.go:141] libmachine: (addons-682228) Creating KVM machine...
	I0916 12:52:37.180582  721192 main.go:141] libmachine: (addons-682228) DBG | found existing default KVM network
	I0916 12:52:37.181281  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:37.181169  721214 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I0916 12:52:37.181367  721192 main.go:141] libmachine: (addons-682228) DBG | created network xml: 
	I0916 12:52:37.181393  721192 main.go:141] libmachine: (addons-682228) DBG | <network>
	I0916 12:52:37.181404  721192 main.go:141] libmachine: (addons-682228) DBG |   <name>mk-addons-682228</name>
	I0916 12:52:37.181414  721192 main.go:141] libmachine: (addons-682228) DBG |   <dns enable='no'/>
	I0916 12:52:37.181421  721192 main.go:141] libmachine: (addons-682228) DBG |   
	I0916 12:52:37.181437  721192 main.go:141] libmachine: (addons-682228) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 12:52:37.181447  721192 main.go:141] libmachine: (addons-682228) DBG |     <dhcp>
	I0916 12:52:37.181458  721192 main.go:141] libmachine: (addons-682228) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 12:52:37.181466  721192 main.go:141] libmachine: (addons-682228) DBG |     </dhcp>
	I0916 12:52:37.181475  721192 main.go:141] libmachine: (addons-682228) DBG |   </ip>
	I0916 12:52:37.181482  721192 main.go:141] libmachine: (addons-682228) DBG |   
	I0916 12:52:37.181492  721192 main.go:141] libmachine: (addons-682228) DBG | </network>
	I0916 12:52:37.181502  721192 main.go:141] libmachine: (addons-682228) DBG | 
	I0916 12:52:37.186536  721192 main.go:141] libmachine: (addons-682228) DBG | trying to create private KVM network mk-addons-682228 192.168.39.0/24...
	I0916 12:52:37.251178  721192 main.go:141] libmachine: (addons-682228) DBG | private KVM network mk-addons-682228 192.168.39.0/24 created
	I0916 12:52:37.251212  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:37.251132  721214 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 12:52:37.251244  721192 main.go:141] libmachine: (addons-682228) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228 ...
	I0916 12:52:37.251265  721192 main.go:141] libmachine: (addons-682228) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 12:52:37.251281  721192 main.go:141] libmachine: (addons-682228) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 12:52:37.515491  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:37.515378  721214 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa...
	I0916 12:52:37.705247  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:37.705105  721214 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/addons-682228.rawdisk...
	I0916 12:52:37.705275  721192 main.go:141] libmachine: (addons-682228) DBG | Writing magic tar header
	I0916 12:52:37.705285  721192 main.go:141] libmachine: (addons-682228) DBG | Writing SSH key tar header
	I0916 12:52:37.705354  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:37.705281  721214 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228 ...
	I0916 12:52:37.705502  721192 main.go:141] libmachine: (addons-682228) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228
	I0916 12:52:37.705528  721192 main.go:141] libmachine: (addons-682228) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228 (perms=drwx------)
	I0916 12:52:37.705537  721192 main.go:141] libmachine: (addons-682228) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 12:52:37.705551  721192 main.go:141] libmachine: (addons-682228) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 12:52:37.705561  721192 main.go:141] libmachine: (addons-682228) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 12:52:37.705571  721192 main.go:141] libmachine: (addons-682228) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 12:52:37.705579  721192 main.go:141] libmachine: (addons-682228) DBG | Checking permissions on dir: /home/jenkins
	I0916 12:52:37.705587  721192 main.go:141] libmachine: (addons-682228) DBG | Checking permissions on dir: /home
	I0916 12:52:37.705592  721192 main.go:141] libmachine: (addons-682228) DBG | Skipping /home - not owner
	I0916 12:52:37.705599  721192 main.go:141] libmachine: (addons-682228) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 12:52:37.705643  721192 main.go:141] libmachine: (addons-682228) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 12:52:37.705685  721192 main.go:141] libmachine: (addons-682228) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 12:52:37.705739  721192 main.go:141] libmachine: (addons-682228) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 12:52:37.705778  721192 main.go:141] libmachine: (addons-682228) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 12:52:37.705795  721192 main.go:141] libmachine: (addons-682228) Creating domain...
	I0916 12:52:37.706852  721192 main.go:141] libmachine: (addons-682228) define libvirt domain using xml: 
	I0916 12:52:37.706875  721192 main.go:141] libmachine: (addons-682228) <domain type='kvm'>
	I0916 12:52:37.706882  721192 main.go:141] libmachine: (addons-682228)   <name>addons-682228</name>
	I0916 12:52:37.706891  721192 main.go:141] libmachine: (addons-682228)   <memory unit='MiB'>4000</memory>
	I0916 12:52:37.706896  721192 main.go:141] libmachine: (addons-682228)   <vcpu>2</vcpu>
	I0916 12:52:37.706901  721192 main.go:141] libmachine: (addons-682228)   <features>
	I0916 12:52:37.706909  721192 main.go:141] libmachine: (addons-682228)     <acpi/>
	I0916 12:52:37.706918  721192 main.go:141] libmachine: (addons-682228)     <apic/>
	I0916 12:52:37.706928  721192 main.go:141] libmachine: (addons-682228)     <pae/>
	I0916 12:52:37.706935  721192 main.go:141] libmachine: (addons-682228)     
	I0916 12:52:37.706943  721192 main.go:141] libmachine: (addons-682228)   </features>
	I0916 12:52:37.706949  721192 main.go:141] libmachine: (addons-682228)   <cpu mode='host-passthrough'>
	I0916 12:52:37.706955  721192 main.go:141] libmachine: (addons-682228)   
	I0916 12:52:37.706961  721192 main.go:141] libmachine: (addons-682228)   </cpu>
	I0916 12:52:37.706967  721192 main.go:141] libmachine: (addons-682228)   <os>
	I0916 12:52:37.706971  721192 main.go:141] libmachine: (addons-682228)     <type>hvm</type>
	I0916 12:52:37.706976  721192 main.go:141] libmachine: (addons-682228)     <boot dev='cdrom'/>
	I0916 12:52:37.706980  721192 main.go:141] libmachine: (addons-682228)     <boot dev='hd'/>
	I0916 12:52:37.706985  721192 main.go:141] libmachine: (addons-682228)     <bootmenu enable='no'/>
	I0916 12:52:37.706994  721192 main.go:141] libmachine: (addons-682228)   </os>
	I0916 12:52:37.706999  721192 main.go:141] libmachine: (addons-682228)   <devices>
	I0916 12:52:37.707003  721192 main.go:141] libmachine: (addons-682228)     <disk type='file' device='cdrom'>
	I0916 12:52:37.707013  721192 main.go:141] libmachine: (addons-682228)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/boot2docker.iso'/>
	I0916 12:52:37.707019  721192 main.go:141] libmachine: (addons-682228)       <target dev='hdc' bus='scsi'/>
	I0916 12:52:37.707024  721192 main.go:141] libmachine: (addons-682228)       <readonly/>
	I0916 12:52:37.707039  721192 main.go:141] libmachine: (addons-682228)     </disk>
	I0916 12:52:37.707047  721192 main.go:141] libmachine: (addons-682228)     <disk type='file' device='disk'>
	I0916 12:52:37.707052  721192 main.go:141] libmachine: (addons-682228)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 12:52:37.707060  721192 main.go:141] libmachine: (addons-682228)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/addons-682228.rawdisk'/>
	I0916 12:52:37.707066  721192 main.go:141] libmachine: (addons-682228)       <target dev='hda' bus='virtio'/>
	I0916 12:52:37.707072  721192 main.go:141] libmachine: (addons-682228)     </disk>
	I0916 12:52:37.707076  721192 main.go:141] libmachine: (addons-682228)     <interface type='network'>
	I0916 12:52:37.707082  721192 main.go:141] libmachine: (addons-682228)       <source network='mk-addons-682228'/>
	I0916 12:52:37.707090  721192 main.go:141] libmachine: (addons-682228)       <model type='virtio'/>
	I0916 12:52:37.707095  721192 main.go:141] libmachine: (addons-682228)     </interface>
	I0916 12:52:37.707099  721192 main.go:141] libmachine: (addons-682228)     <interface type='network'>
	I0916 12:52:37.707105  721192 main.go:141] libmachine: (addons-682228)       <source network='default'/>
	I0916 12:52:37.707111  721192 main.go:141] libmachine: (addons-682228)       <model type='virtio'/>
	I0916 12:52:37.707116  721192 main.go:141] libmachine: (addons-682228)     </interface>
	I0916 12:52:37.707120  721192 main.go:141] libmachine: (addons-682228)     <serial type='pty'>
	I0916 12:52:37.707125  721192 main.go:141] libmachine: (addons-682228)       <target port='0'/>
	I0916 12:52:37.707132  721192 main.go:141] libmachine: (addons-682228)     </serial>
	I0916 12:52:37.707139  721192 main.go:141] libmachine: (addons-682228)     <console type='pty'>
	I0916 12:52:37.707144  721192 main.go:141] libmachine: (addons-682228)       <target type='serial' port='0'/>
	I0916 12:52:37.707149  721192 main.go:141] libmachine: (addons-682228)     </console>
	I0916 12:52:37.707154  721192 main.go:141] libmachine: (addons-682228)     <rng model='virtio'>
	I0916 12:52:37.707160  721192 main.go:141] libmachine: (addons-682228)       <backend model='random'>/dev/random</backend>
	I0916 12:52:37.707164  721192 main.go:141] libmachine: (addons-682228)     </rng>
	I0916 12:52:37.707189  721192 main.go:141] libmachine: (addons-682228)     
	I0916 12:52:37.707215  721192 main.go:141] libmachine: (addons-682228)     
	I0916 12:52:37.707247  721192 main.go:141] libmachine: (addons-682228)   </devices>
	I0916 12:52:37.707267  721192 main.go:141] libmachine: (addons-682228) </domain>
	I0916 12:52:37.707277  721192 main.go:141] libmachine: (addons-682228) 
	I0916 12:52:37.711871  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:a6:21:d3 in network default
	I0916 12:52:37.712480  721192 main.go:141] libmachine: (addons-682228) Ensuring networks are active...
	I0916 12:52:37.712498  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:37.713097  721192 main.go:141] libmachine: (addons-682228) Ensuring network default is active
	I0916 12:52:37.713476  721192 main.go:141] libmachine: (addons-682228) Ensuring network mk-addons-682228 is active
	I0916 12:52:37.713947  721192 main.go:141] libmachine: (addons-682228) Getting domain xml...
	I0916 12:52:37.714559  721192 main.go:141] libmachine: (addons-682228) Creating domain...
	I0916 12:52:38.890043  721192 main.go:141] libmachine: (addons-682228) Waiting to get IP...
	I0916 12:52:38.890781  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:38.891087  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:38.891123  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:38.891078  721214 retry.go:31] will retry after 304.043967ms: waiting for machine to come up
	I0916 12:52:39.196461  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:39.196966  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:39.197002  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:39.196927  721214 retry.go:31] will retry after 384.243971ms: waiting for machine to come up
	I0916 12:52:39.582571  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:39.582976  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:39.583001  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:39.582936  721214 retry.go:31] will retry after 346.452072ms: waiting for machine to come up
	I0916 12:52:39.930606  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:39.931042  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:39.931073  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:39.930996  721214 retry.go:31] will retry after 369.825435ms: waiting for machine to come up
	I0916 12:52:40.302864  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:40.303529  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:40.303559  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:40.303461  721214 retry.go:31] will retry after 749.992882ms: waiting for machine to come up
	I0916 12:52:41.055384  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:41.055847  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:41.055878  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:41.055800  721214 retry.go:31] will retry after 654.360651ms: waiting for machine to come up
	I0916 12:52:41.711666  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:41.712044  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:41.712066  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:41.712003  721214 retry.go:31] will retry after 1.015920634s: waiting for machine to come up
	I0916 12:52:42.729702  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:42.730048  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:42.730079  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:42.730010  721214 retry.go:31] will retry after 1.042372854s: waiting for machine to come up
	I0916 12:52:43.774155  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:43.774556  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:43.774582  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:43.774514  721214 retry.go:31] will retry after 1.221937105s: waiting for machine to come up
	I0916 12:52:44.997792  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:44.998167  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:44.998198  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:44.998119  721214 retry.go:31] will retry after 1.664248133s: waiting for machine to come up
	I0916 12:52:46.664846  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:46.665238  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:46.665259  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:46.665200  721214 retry.go:31] will retry after 2.046931123s: waiting for machine to come up
	I0916 12:52:48.713487  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:48.713937  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:48.713965  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:48.713882  721214 retry.go:31] will retry after 2.925490846s: waiting for machine to come up
	I0916 12:52:51.642990  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:51.643432  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:51.643459  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:51.643365  721214 retry.go:31] will retry after 2.906514643s: waiting for machine to come up
	I0916 12:52:54.552882  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:54.553246  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find current IP address of domain addons-682228 in network mk-addons-682228
	I0916 12:52:54.553273  721192 main.go:141] libmachine: (addons-682228) DBG | I0916 12:52:54.553192  721214 retry.go:31] will retry after 3.64574912s: waiting for machine to come up
	I0916 12:52:58.200630  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.201078  721192 main.go:141] libmachine: (addons-682228) Found IP for machine: 192.168.39.232
	I0916 12:52:58.201103  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has current primary IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.201110  721192 main.go:141] libmachine: (addons-682228) Reserving static IP address...
	I0916 12:52:58.201541  721192 main.go:141] libmachine: (addons-682228) DBG | unable to find host DHCP lease matching {name: "addons-682228", mac: "52:54:00:67:7f:50", ip: "192.168.39.232"} in network mk-addons-682228
	I0916 12:52:58.272857  721192 main.go:141] libmachine: (addons-682228) DBG | Getting to WaitForSSH function...
	I0916 12:52:58.272893  721192 main.go:141] libmachine: (addons-682228) Reserved static IP address: 192.168.39.232
	I0916 12:52:58.272906  721192 main.go:141] libmachine: (addons-682228) Waiting for SSH to be available...
	I0916 12:52:58.275401  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.275820  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:58.275846  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.276083  721192 main.go:141] libmachine: (addons-682228) DBG | Using SSH client type: external
	I0916 12:52:58.276112  721192 main.go:141] libmachine: (addons-682228) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa (-rw-------)
	I0916 12:52:58.276144  721192 main.go:141] libmachine: (addons-682228) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 12:52:58.276158  721192 main.go:141] libmachine: (addons-682228) DBG | About to run SSH command:
	I0916 12:52:58.276171  721192 main.go:141] libmachine: (addons-682228) DBG | exit 0
	I0916 12:52:58.397315  721192 main.go:141] libmachine: (addons-682228) DBG | SSH cmd err, output: <nil>: 
	I0916 12:52:58.397597  721192 main.go:141] libmachine: (addons-682228) KVM machine creation complete!
	I0916 12:52:58.397953  721192 main.go:141] libmachine: (addons-682228) Calling .GetConfigRaw
	I0916 12:52:58.398531  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:58.398714  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:58.398851  721192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 12:52:58.398866  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:52:58.400047  721192 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 12:52:58.400061  721192 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 12:52:58.400066  721192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 12:52:58.400071  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:58.402431  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.402773  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:58.402796  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.402918  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:58.403098  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.403234  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.403417  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:58.403561  721192 main.go:141] libmachine: Using SSH client type: native
	I0916 12:52:58.403799  721192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0916 12:52:58.403814  721192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 12:52:58.504657  721192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 12:52:58.504682  721192 main.go:141] libmachine: Detecting the provisioner...
	I0916 12:52:58.504693  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:58.507107  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.507383  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:58.507403  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.507564  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:58.507775  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.507940  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.508085  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:58.508233  721192 main.go:141] libmachine: Using SSH client type: native
	I0916 12:52:58.508400  721192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0916 12:52:58.508410  721192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 12:52:58.606008  721192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 12:52:58.606101  721192 main.go:141] libmachine: found compatible host: buildroot
	I0916 12:52:58.606113  721192 main.go:141] libmachine: Provisioning with buildroot...
	I0916 12:52:58.606125  721192 main.go:141] libmachine: (addons-682228) Calling .GetMachineName
	I0916 12:52:58.606345  721192 buildroot.go:166] provisioning hostname "addons-682228"
	I0916 12:52:58.606374  721192 main.go:141] libmachine: (addons-682228) Calling .GetMachineName
	I0916 12:52:58.606557  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:58.608931  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.609233  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:58.609250  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.609399  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:58.609563  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.609724  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.609833  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:58.609977  721192 main.go:141] libmachine: Using SSH client type: native
	I0916 12:52:58.610139  721192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0916 12:52:58.610150  721192 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-682228 && echo "addons-682228" | sudo tee /etc/hostname
	I0916 12:52:58.723001  721192 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-682228
	
	I0916 12:52:58.723030  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:58.725660  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.726009  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:58.726040  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.726181  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:58.726375  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.726519  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:58.726629  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:58.726761  721192 main.go:141] libmachine: Using SSH client type: native
	I0916 12:52:58.726931  721192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0916 12:52:58.726945  721192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-682228' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-682228/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-682228' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 12:52:58.833375  721192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 12:52:58.833403  721192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 12:52:58.833434  721192 buildroot.go:174] setting up certificates
	I0916 12:52:58.833448  721192 provision.go:84] configureAuth start
	I0916 12:52:58.833464  721192 main.go:141] libmachine: (addons-682228) Calling .GetMachineName
	I0916 12:52:58.833737  721192 main.go:141] libmachine: (addons-682228) Calling .GetIP
	I0916 12:52:58.836301  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.836643  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:58.836676  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.836781  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:58.838775  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.839030  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:58.839053  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:58.839201  721192 provision.go:143] copyHostCerts
	I0916 12:52:58.839289  721192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 12:52:58.839414  721192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 12:52:58.839475  721192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 12:52:58.839559  721192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.addons-682228 san=[127.0.0.1 192.168.39.232 addons-682228 localhost minikube]
	I0916 12:52:59.310196  721192 provision.go:177] copyRemoteCerts
	I0916 12:52:59.310308  721192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 12:52:59.310343  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:59.312843  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.313129  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.313171  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.313306  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:59.313485  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.313623  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:59.313745  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:52:59.391160  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 12:52:59.413846  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 12:52:59.435507  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 12:52:59.457007  721192 provision.go:87] duration metric: took 623.545164ms to configureAuth
	I0916 12:52:59.457026  721192 buildroot.go:189] setting minikube options for container-runtime
	I0916 12:52:59.457215  721192 config.go:182] Loaded profile config "addons-682228": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:52:59.457316  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:59.459752  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.460045  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.460071  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.460205  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:59.460393  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.460626  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.460790  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:59.460991  721192 main.go:141] libmachine: Using SSH client type: native
	I0916 12:52:59.461209  721192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0916 12:52:59.461232  721192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 12:52:59.668819  721192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 12:52:59.668848  721192 main.go:141] libmachine: Checking connection to Docker...
	I0916 12:52:59.668859  721192 main.go:141] libmachine: (addons-682228) Calling .GetURL
	I0916 12:52:59.670051  721192 main.go:141] libmachine: (addons-682228) DBG | Using libvirt version 6000000
	I0916 12:52:59.672389  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.672744  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.672769  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.672919  721192 main.go:141] libmachine: Docker is up and running!
	I0916 12:52:59.672937  721192 main.go:141] libmachine: Reticulating splines...
	I0916 12:52:59.672944  721192 client.go:171] duration metric: took 22.876067321s to LocalClient.Create
	I0916 12:52:59.672972  721192 start.go:167] duration metric: took 22.876156244s to libmachine.API.Create "addons-682228"
	I0916 12:52:59.673000  721192 start.go:293] postStartSetup for "addons-682228" (driver="kvm2")
	I0916 12:52:59.673016  721192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 12:52:59.673037  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:59.673277  721192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 12:52:59.673301  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:59.675444  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.675761  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.675789  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.675954  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:59.676134  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.676288  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:59.676456  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
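(Editor's note: the sshutil line above constructs an SSH client from an IP, port, key path and username. A rough, self-contained equivalent using golang.org/x/crypto/ssh is sketched below; the address and key path are the ones from the log, but this is only a sketch, not minikube's sshutil implementation.)

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.232:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same command the log runs right after the client is created.
	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```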
	I0916 12:52:59.755316  721192 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 12:52:59.759334  721192 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 12:52:59.759360  721192 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 12:52:59.759445  721192 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 12:52:59.759474  721192 start.go:296] duration metric: took 86.465179ms for postStartSetup
	I0916 12:52:59.759515  721192 main.go:141] libmachine: (addons-682228) Calling .GetConfigRaw
	I0916 12:52:59.760068  721192 main.go:141] libmachine: (addons-682228) Calling .GetIP
	I0916 12:52:59.762528  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.762848  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.762872  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.763069  721192 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/config.json ...
	I0916 12:52:59.763237  721192 start.go:128] duration metric: took 22.983653695s to createHost
	I0916 12:52:59.763258  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:59.765626  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.765960  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.765999  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.766113  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:59.766283  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.766414  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.766545  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:59.766665  721192 main.go:141] libmachine: Using SSH client type: native
	I0916 12:52:59.766830  721192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0916 12:52:59.766844  721192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 12:52:59.865716  721192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726491179.839710780
	
	I0916 12:52:59.865745  721192 fix.go:216] guest clock: 1726491179.839710780
	I0916 12:52:59.865757  721192 fix.go:229] Guest: 2024-09-16 12:52:59.83971078 +0000 UTC Remote: 2024-09-16 12:52:59.763247531 +0000 UTC m=+23.081345986 (delta=76.463249ms)
	I0916 12:52:59.865785  721192 fix.go:200] guest clock delta is within tolerance: 76.463249ms
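(Editor's note: the fix.go lines above read the guest clock via `date +%s.%N` over SSH and accept the drift against the host clock if it is within a tolerance. A hedged sketch of that comparison; the one-second tolerance is an assumption, the log only shows that 76ms passed the check.)

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns the signed difference between guest and host clocks. Parsing via
// float64 loses a few nanoseconds of precision, which is fine for this check.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values from the log above: guest 1726491179.839710780, host 12:52:59.763247531 UTC.
	host := time.Date(2024, 9, 16, 12, 52, 59, 763247531, time.UTC)
	delta, err := guestClockDelta("1726491179.839710780", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	ok := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, ok)
}
```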
	I0916 12:52:59.865793  721192 start.go:83] releasing machines lock for "addons-682228", held for 23.086290569s
	I0916 12:52:59.865823  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:59.866053  721192 main.go:141] libmachine: (addons-682228) Calling .GetIP
	I0916 12:52:59.868254  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.868582  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.868606  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.868717  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:59.869196  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:59.869351  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:52:59.869438  721192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 12:52:59.869499  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:59.869554  721192 ssh_runner.go:195] Run: cat /version.json
	I0916 12:52:59.869582  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:52:59.872081  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.872263  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.872397  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.872424  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.872635  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:52:59.872652  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:59.872663  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:52:59.872763  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:52:59.872820  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.872950  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:59.872966  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:52:59.873101  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:52:59.873104  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:52:59.873233  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:52:59.946319  721192 ssh_runner.go:195] Run: systemctl --version
	I0916 12:52:59.969936  721192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 12:53:00.121829  721192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 12:53:00.128373  721192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 12:53:00.128457  721192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:53:00.144334  721192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 12:53:00.144370  721192 start.go:495] detecting cgroup driver to use...
	I0916 12:53:00.144457  721192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 12:53:00.161417  721192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 12:53:00.176360  721192 docker.go:217] disabling cri-docker service (if available) ...
	I0916 12:53:00.176440  721192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 12:53:00.190631  721192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 12:53:00.205009  721192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 12:53:00.316546  721192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 12:53:00.472125  721192 docker.go:233] disabling docker service ...
	I0916 12:53:00.472192  721192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 12:53:00.486232  721192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 12:53:00.498511  721192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 12:53:00.619801  721192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 12:53:00.736892  721192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 12:53:00.750562  721192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 12:53:00.768802  721192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 12:53:00.768883  721192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:53:00.778672  721192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 12:53:00.778762  721192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:53:00.788507  721192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:53:00.797975  721192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:53:00.807437  721192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 12:53:00.817106  721192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:53:00.826445  721192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:53:00.842269  721192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
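(Editor's note: the series of `sed` invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. The Go sketch below applies the same substitutions to an illustrative config fragment; only the substitutions mirror the log, the input text is assumed.)

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative fragment; the real 02-crio.conf lives on the VM.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image -> registry.k8s.io/pause:3.10
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// cgroup_manager -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// ensure default_sysctls exists, then open it with the unprivileged-port entry
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
			ReplaceAllString(conf, "$1\ndefault_sysctls = [\n]")
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	fmt.Print(conf)
}
```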
	I0916 12:53:00.851740  721192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 12:53:00.860275  721192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 12:53:00.860320  721192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 12:53:00.873022  721192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 12:53:00.881841  721192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:53:00.990684  721192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 12:53:01.075530  721192 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 12:53:01.075635  721192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 12:53:01.080276  721192 start.go:563] Will wait 60s for crictl version
	I0916 12:53:01.080351  721192 ssh_runner.go:195] Run: which crictl
	I0916 12:53:01.083968  721192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 12:53:01.122030  721192 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 12:53:01.122148  721192 ssh_runner.go:195] Run: crio --version
	I0916 12:53:01.149263  721192 ssh_runner.go:195] Run: crio --version
	I0916 12:53:01.179421  721192 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 12:53:01.180514  721192 main.go:141] libmachine: (addons-682228) Calling .GetIP
	I0916 12:53:01.183688  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:01.184048  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:01.184071  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:01.184284  721192 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 12:53:01.188345  721192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
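(Editor's note: the one-liner above is an idempotent /etc/hosts update: drop any stale `host.minikube.internal` entry, then append the current gateway IP. The same logic in plain Go, operating on a string rather than the real file since writing /etc/hosts needs root.)

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line already ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
```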
	I0916 12:53:01.200465  721192 kubeadm.go:883] updating cluster {Name:addons-682228 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-682228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 12:53:01.200611  721192 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:53:01.200670  721192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:53:01.232009  721192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 12:53:01.232079  721192 ssh_runner.go:195] Run: which lz4
	I0916 12:53:01.235918  721192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 12:53:01.239953  721192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 12:53:01.239982  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 12:53:02.536096  721192 crio.go:462] duration metric: took 1.300224141s to copy over tarball
	I0916 12:53:02.536179  721192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 12:53:04.530170  721192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.993963291s)
	I0916 12:53:04.530200  721192 crio.go:469] duration metric: took 1.994074153s to extract the tarball
	I0916 12:53:04.530208  721192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 12:53:04.566618  721192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:53:04.608095  721192 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:53:04.608129  721192 cache_images.go:84] Images are preloaded, skipping loading
	I0916 12:53:04.608138  721192 kubeadm.go:934] updating node { 192.168.39.232 8443 v1.31.1 crio true true} ...
	I0916 12:53:04.608239  721192 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-682228 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-682228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 12:53:04.608304  721192 ssh_runner.go:195] Run: crio config
	I0916 12:53:04.653862  721192 cni.go:84] Creating CNI manager for ""
	I0916 12:53:04.653889  721192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 12:53:04.653899  721192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 12:53:04.653923  721192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-682228 NodeName:addons-682228 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 12:53:04.654060  721192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-682228"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
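(Editor's note: the YAML above is rendered from the kubeadm options struct logged at 12:53:04.653923, which carries the node IP, pod and service CIDRs, cluster name and CRI socket. Below is a minimal text/template sketch of that rendering step for the networking stanza only; the template text is illustrative, not minikube's actual template.)

```go
package main

import (
	"os"
	"text/template"
)

type networking struct {
	DNSDomain     string
	PodSubnet     string
	ServiceSubnet string
}

const tmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("networking").Parse(tmpl))
	// Values from the kubeadm options logged above.
	err := t.Execute(os.Stdout, networking{
		DNSDomain:     "cluster.local",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
```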
	I0916 12:53:04.654132  721192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 12:53:04.663637  721192 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 12:53:04.663714  721192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 12:53:04.672381  721192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 12:53:04.687960  721192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 12:53:04.703438  721192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0916 12:53:04.718889  721192 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0916 12:53:04.722485  721192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 12:53:04.733702  721192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:53:04.844060  721192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:53:04.859395  721192 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228 for IP: 192.168.39.232
	I0916 12:53:04.859421  721192 certs.go:194] generating shared ca certs ...
	I0916 12:53:04.859441  721192 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:04.859585  721192 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 12:53:05.088471  721192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt ...
	I0916 12:53:05.088505  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt: {Name:mkc07e8b1def13105de2f4245a61ce4104082bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.088681  721192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key ...
	I0916 12:53:05.088692  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key: {Name:mk500c720613919135220052691c2bb9ec2be826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.088763  721192 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 12:53:05.261075  721192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt ...
	I0916 12:53:05.261101  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt: {Name:mk1b086aa33bb05875acf459b1722b7ef72c9e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.261241  721192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key ...
	I0916 12:53:05.261251  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key: {Name:mk79692ed1e0b7bd0b0ee307d796eba974607bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.261319  721192 certs.go:256] generating profile certs ...
	I0916 12:53:05.261394  721192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/client.key
	I0916 12:53:05.261417  721192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/client.crt with IP's: []
	I0916 12:53:05.372513  721192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/client.crt ...
	I0916 12:53:05.372544  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/client.crt: {Name:mkdf07a6a684d1223a7e0a29b2ffa76d38b4c1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.372703  721192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/client.key ...
	I0916 12:53:05.372715  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/client.key: {Name:mk443732263fbaba4ad72de83282a80315b4559b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.372780  721192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.key.a5851f2e
	I0916 12:53:05.372798  721192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.crt.a5851f2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.232]
	I0916 12:53:05.511294  721192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.crt.a5851f2e ...
	I0916 12:53:05.511323  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.crt.a5851f2e: {Name:mk08bf0c0aa27ad8687992e8e5fd1ac05be81e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.511473  721192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.key.a5851f2e ...
	I0916 12:53:05.511486  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.key.a5851f2e: {Name:mkf4d4fc1fd8fc2f5ab5d86a0f83d00730defc33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.511553  721192 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.crt.a5851f2e -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.crt
	I0916 12:53:05.511628  721192 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.key.a5851f2e -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.key
	I0916 12:53:05.511676  721192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.key
	I0916 12:53:05.511693  721192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.crt with IP's: []
	I0916 12:53:05.649655  721192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.crt ...
	I0916 12:53:05.649700  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.crt: {Name:mkd8ee61fdbe8281195c66eab7237d0949358896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.649850  721192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.key ...
	I0916 12:53:05.649862  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.key: {Name:mk123ba60eaa4201f14f86d29c66ac8e2cc42b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:05.650044  721192 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 12:53:05.650093  721192 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 12:53:05.650124  721192 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 12:53:05.650265  721192 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
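(Editor's note: the certs.go/crypto.go lines above generate the profile's apiserver certificate with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.232. A self-contained crypto/x509 sketch for a certificate with those IP SANs follows; it is self-signed here for brevity, whereas minikube signs with its minikubeCA.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.232"),
		},
	}
	// Self-signed: the template acts as both certificate and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```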
	I0916 12:53:05.650900  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 12:53:05.684569  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 12:53:05.710380  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 12:53:05.740372  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 12:53:05.762799  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 12:53:05.784809  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 12:53:05.807981  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 12:53:05.830406  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/addons-682228/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 12:53:05.852754  721192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 12:53:05.874492  721192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 12:53:05.889964  721192 ssh_runner.go:195] Run: openssl version
	I0916 12:53:05.895566  721192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 12:53:05.905538  721192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:53:05.909711  721192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:53:05.909746  721192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:53:05.915098  721192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 12:53:05.924847  721192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 12:53:05.928620  721192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 12:53:05.928668  721192 kubeadm.go:392] StartCluster: {Name:addons-682228 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-682228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:53:05.928739  721192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 12:53:05.928773  721192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 12:53:05.963450  721192 cri.go:89] found id: ""
	I0916 12:53:05.963514  721192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 12:53:05.972650  721192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 12:53:05.981211  721192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 12:53:05.989734  721192 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 12:53:05.989750  721192 kubeadm.go:157] found existing configuration files:
	
	I0916 12:53:05.989784  721192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 12:53:05.997758  721192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 12:53:05.997800  721192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 12:53:06.006254  721192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 12:53:06.014238  721192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 12:53:06.014289  721192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 12:53:06.022577  721192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 12:53:06.030616  721192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 12:53:06.030671  721192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 12:53:06.039142  721192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 12:53:06.047143  721192 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 12:53:06.047187  721192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 12:53:06.055448  721192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 12:53:06.105869  721192 kubeadm.go:310] W0916 12:53:06.087284     821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:53:06.107496  721192 kubeadm.go:310] W0916 12:53:06.089006     821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:53:06.201743  721192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 12:53:16.148927  721192 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 12:53:16.149012  721192 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 12:53:16.149114  721192 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 12:53:16.149255  721192 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 12:53:16.149354  721192 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 12:53:16.149405  721192 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 12:53:16.150787  721192 out.go:235]   - Generating certificates and keys ...
	I0916 12:53:16.150896  721192 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 12:53:16.150948  721192 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 12:53:16.151005  721192 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 12:53:16.151054  721192 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 12:53:16.151102  721192 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 12:53:16.151149  721192 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 12:53:16.151191  721192 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 12:53:16.151291  721192 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-682228 localhost] and IPs [192.168.39.232 127.0.0.1 ::1]
	I0916 12:53:16.151334  721192 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 12:53:16.151500  721192 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-682228 localhost] and IPs [192.168.39.232 127.0.0.1 ::1]
	I0916 12:53:16.151603  721192 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 12:53:16.151697  721192 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 12:53:16.151765  721192 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 12:53:16.151853  721192 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 12:53:16.151911  721192 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 12:53:16.151981  721192 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 12:53:16.152036  721192 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 12:53:16.152126  721192 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 12:53:16.152208  721192 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 12:53:16.152308  721192 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 12:53:16.152403  721192 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 12:53:16.153794  721192 out.go:235]   - Booting up control plane ...
	I0916 12:53:16.153882  721192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 12:53:16.153968  721192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 12:53:16.154054  721192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 12:53:16.154175  721192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 12:53:16.154294  721192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 12:53:16.154356  721192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 12:53:16.154468  721192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 12:53:16.154588  721192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 12:53:16.154663  721192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.925928ms
	I0916 12:53:16.154724  721192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 12:53:16.154769  721192 kubeadm.go:310] [api-check] The API server is healthy after 5.501972805s
	I0916 12:53:16.154867  721192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 12:53:16.154982  721192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 12:53:16.155029  721192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 12:53:16.155179  721192 kubeadm.go:310] [mark-control-plane] Marking the node addons-682228 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 12:53:16.155265  721192 kubeadm.go:310] [bootstrap-token] Using token: k626v8.lv6r3mpdvdczfgnh
	I0916 12:53:16.156396  721192 out.go:235]   - Configuring RBAC rules ...
	I0916 12:53:16.156511  721192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 12:53:16.156591  721192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 12:53:16.156724  721192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 12:53:16.156862  721192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 12:53:16.157009  721192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 12:53:16.157132  721192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 12:53:16.157244  721192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 12:53:16.157294  721192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 12:53:16.157342  721192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 12:53:16.157348  721192 kubeadm.go:310] 
	I0916 12:53:16.157401  721192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 12:53:16.157407  721192 kubeadm.go:310] 
	I0916 12:53:16.157498  721192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 12:53:16.157509  721192 kubeadm.go:310] 
	I0916 12:53:16.157547  721192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 12:53:16.157628  721192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 12:53:16.157720  721192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 12:53:16.157730  721192 kubeadm.go:310] 
	I0916 12:53:16.157796  721192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 12:53:16.157806  721192 kubeadm.go:310] 
	I0916 12:53:16.157876  721192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 12:53:16.157884  721192 kubeadm.go:310] 
	I0916 12:53:16.157956  721192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 12:53:16.158025  721192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 12:53:16.158119  721192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 12:53:16.158127  721192 kubeadm.go:310] 
	I0916 12:53:16.158196  721192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 12:53:16.158298  721192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 12:53:16.158311  721192 kubeadm.go:310] 
	I0916 12:53:16.158392  721192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k626v8.lv6r3mpdvdczfgnh \
	I0916 12:53:16.158480  721192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 \
	I0916 12:53:16.158499  721192 kubeadm.go:310] 	--control-plane 
	I0916 12:53:16.158505  721192 kubeadm.go:310] 
	I0916 12:53:16.158581  721192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 12:53:16.158587  721192 kubeadm.go:310] 
	I0916 12:53:16.158654  721192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k626v8.lv6r3mpdvdczfgnh \
	I0916 12:53:16.158786  721192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 
	I0916 12:53:16.158812  721192 cni.go:84] Creating CNI manager for ""
	I0916 12:53:16.158824  721192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 12:53:16.160094  721192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 12:53:16.161195  721192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 12:53:16.172103  721192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
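(Editor's note: the 496-byte /etc/cni/net.d/1-k8s.conflist written above is not printed in the log. The sketch below emits a generic bridge + host-local conflist for the 10.244.0.0/16 pod CIDR purely to illustrate the shape of such a file; every field value here is an assumption, not the file minikube actually wrote.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Generic bridge CNI conflist; values are illustrative only.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```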
	I0916 12:53:16.189192  721192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 12:53:16.189314  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:16.189349  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-682228 minikube.k8s.io/updated_at=2024_09_16T12_53_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86 minikube.k8s.io/name=addons-682228 minikube.k8s.io/primary=true
	I0916 12:53:16.215802  721192 ops.go:34] apiserver oom_adj: -16
	I0916 12:53:16.329468  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:16.830312  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:17.330371  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:17.830303  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:18.329909  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:18.830007  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:19.330163  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:19.830578  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:20.330367  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:20.829787  721192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:53:20.928758  721192 kubeadm.go:1113] duration metric: took 4.739496492s to wait for elevateKubeSystemPrivileges
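(Editor's note: the repeated `kubectl get sa default` runs between 12:53:16 and 12:53:20 are a poll loop: retry roughly every 500ms until the default service account exists or a deadline passes; the 4.74s metric above is how long that took in this run. A generic sketch of such a loop follows; the command form and the two-minute deadline are illustrative.)

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// pollUntil runs check immediately and then every interval until it succeeds
// or the context expires, mirroring the retry cadence visible in the log.
func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) // assumed deadline
	defer cancel()
	err := pollUntil(ctx, 500*time.Millisecond, func() error {
		// Mirrors the repeated command in the log; kubeconfig flag omitted here.
		return exec.CommandContext(ctx, "kubectl", "get", "sa", "default").Run()
	})
	fmt.Println("default service account ready:", err == nil)
}
```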
	I0916 12:53:20.928792  721192 kubeadm.go:394] duration metric: took 15.000127974s to StartCluster
	I0916 12:53:20.928814  721192 settings.go:142] acquiring lock: {Name:mka9d51f09298db6ba9006267d9a91b0a28fad59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:20.928950  721192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 12:53:20.929394  721192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/kubeconfig: {Name:mk84449075783d20927a7d708361081f8c4a2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:53:20.929645  721192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 12:53:20.929638  721192 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:53:20.929686  721192 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 12:53:20.929894  721192 config.go:182] Loaded profile config "addons-682228": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:53:20.929941  721192 addons.go:69] Setting cloud-spanner=true in profile "addons-682228"
	I0916 12:53:20.929952  721192 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-682228"
	I0916 12:53:20.929976  721192 addons.go:234] Setting addon cloud-spanner=true in "addons-682228"
	I0916 12:53:20.929994  721192 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-682228"
	I0916 12:53:20.930010  721192 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-682228"
	I0916 12:53:20.930014  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930021  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930028  721192 addons.go:69] Setting yakd=true in profile "addons-682228"
	I0916 12:53:20.930029  721192 addons.go:69] Setting gcp-auth=true in profile "addons-682228"
	I0916 12:53:20.930060  721192 addons.go:69] Setting ingress=true in profile "addons-682228"
	I0916 12:53:20.930042  721192 addons.go:69] Setting default-storageclass=true in profile "addons-682228"
	I0916 12:53:20.930078  721192 addons.go:234] Setting addon ingress=true in "addons-682228"
	I0916 12:53:20.930084  721192 mustload.go:65] Loading cluster: addons-682228
	I0916 12:53:20.930086  721192 addons.go:69] Setting storage-provisioner=true in profile "addons-682228"
	I0916 12:53:20.930093  721192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-682228"
	I0916 12:53:20.930106  721192 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-682228"
	I0916 12:53:20.930113  721192 addons.go:69] Setting volcano=true in profile "addons-682228"
	I0916 12:53:20.930129  721192 addons.go:69] Setting ingress-dns=true in profile "addons-682228"
	I0916 12:53:20.930136  721192 addons.go:234] Setting addon volcano=true in "addons-682228"
	I0916 12:53:20.930141  721192 addons.go:69] Setting volumesnapshots=true in profile "addons-682228"
	I0916 12:53:20.930157  721192 addons.go:234] Setting addon volumesnapshots=true in "addons-682228"
	I0916 12:53:20.930166  721192 addons.go:234] Setting addon ingress-dns=true in "addons-682228"
	I0916 12:53:20.930170  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930201  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930129  721192 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-682228"
	I0916 12:53:20.930067  721192 addons.go:234] Setting addon yakd=true in "addons-682228"
	I0916 12:53:20.930262  721192 addons.go:69] Setting inspektor-gadget=true in profile "addons-682228"
	I0916 12:53:20.930690  721192 addons.go:234] Setting addon inspektor-gadget=true in "addons-682228"
	I0916 12:53:20.930736  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930735  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930775  721192 config.go:182] Loaded profile config "addons-682228": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:53:20.930078  721192 addons.go:69] Setting registry=true in profile "addons-682228"
	I0916 12:53:20.930882  721192 addons.go:234] Setting addon registry=true in "addons-682228"
	I0916 12:53:20.930918  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930206  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.930041  721192 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-682228"
	I0916 12:53:20.931114  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.931113  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931196  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.931239  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931290  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.931292  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931329  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.930099  721192 addons.go:234] Setting addon storage-provisioner=true in "addons-682228"
	I0916 12:53:20.931446  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931451  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931491  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.931541  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.930120  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.931569  721192 addons.go:69] Setting metrics-server=true in profile "addons-682228"
	I0916 12:53:20.931592  721192 addons.go:234] Setting addon metrics-server=true in "addons-682228"
	I0916 12:53:20.930050  721192 addons.go:69] Setting helm-tiller=true in profile "addons-682228"
	I0916 12:53:20.931705  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931740  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.931761  721192 addons.go:234] Setting addon helm-tiller=true in "addons-682228"
	I0916 12:53:20.931808  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.931883  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931901  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.931914  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.931929  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.931982  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.932009  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.932086  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.932120  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.932133  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.932155  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.932292  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.932477  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.933248  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.933265  721192 out.go:177] * Verifying Kubernetes components...
	I0916 12:53:20.933512  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.933248  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.934112  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.934919  721192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:53:20.951376  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33999
	I0916 12:53:20.951686  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0916 12:53:20.952063  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:20.952196  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:20.952618  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:20.952637  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:20.952859  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:20.952877  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:20.952957  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:20.953014  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0916 12:53:20.953132  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33971
	I0916 12:53:20.953250  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:20.953322  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:20.953881  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.953924  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.953997  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:20.954144  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:20.954281  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I0916 12:53:20.954667  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:20.954693  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:20.954880  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:20.954900  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:20.955206  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.962282  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:20.962374  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:20.962606  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.962651  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.962821  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.962871  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.963070  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:20.963602  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:20.963627  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:20.963661  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.963704  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.963712  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.963711  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.963743  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.963759  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.963967  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.964021  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.964226  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:20.973855  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0916 12:53:20.974472  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.974506  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.974752  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:20.975344  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:20.975359  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:20.975789  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:20.976005  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:20.979583  721192 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-682228"
	I0916 12:53:20.979623  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:20.979979  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.980013  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:20.995705  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0916 12:53:20.996258  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:20.996799  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:20.996813  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:20.997148  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:20.997751  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:20.997790  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.003456  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I0916 12:53:21.004043  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.004634  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.004653  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.005075  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.005738  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.005783  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.018612  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0916 12:53:21.018705  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35803
	I0916 12:53:21.019079  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.019622  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.019642  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.019814  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0916 12:53:21.019970  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.020553  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.020595  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.020852  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.021386  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.021404  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.021602  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0916 12:53:21.021777  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.021827  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.022029  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.022106  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.022334  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
	I0916 12:53:21.022583  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.022597  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.023037  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.023617  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.023646  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.024238  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.024254  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.024415  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.024435  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.024477  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0916 12:53:21.024932  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.025108  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.025127  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.025562  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0916 12:53:21.025645  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.025662  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.025697  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.025738  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.026170  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.026223  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.026275  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.026720  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.026750  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.027086  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.027104  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.027360  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0916 12:53:21.027510  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.028256  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0916 12:53:21.028424  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.028448  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.028690  721192 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 12:53:21.028902  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.028924  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.029109  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.029123  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.029195  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.029530  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.029683  721192 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 12:53:21.029704  721192 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 12:53:21.029742  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.030390  721192 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 12:53:21.031513  721192 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 12:53:21.031530  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 12:53:21.031548  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.033193  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0916 12:53:21.034979  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.035109  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.035517  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.035559  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.035893  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.035921  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.036117  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.036168  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.036311  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.037136  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.037222  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.037392  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.037588  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.037738  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.037886  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.037900  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.038053  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.038277  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.038468  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.040778  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32955
	I0916 12:53:21.041009  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I0916 12:53:21.041097  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34119
	I0916 12:53:21.041685  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.041914  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.042006  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.042272  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:21.042288  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:21.042342  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.042375  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.042392  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.042717  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:21.042734  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:21.042722  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:21.042744  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:21.042752  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:21.042900  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.042911  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.042963  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.043122  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:21.043133  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 12:53:21.043227  721192 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 12:53:21.043665  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.043807  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.044677  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.044712  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.045127  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.045149  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.045390  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.046143  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.046173  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.046701  721192 addons.go:234] Setting addon default-storageclass=true in "addons-682228"
	I0916 12:53:21.046750  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:21.047115  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.047172  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.047488  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.047580  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.047645  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0916 12:53:21.047749  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34637
	I0916 12:53:21.048060  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.048240  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.048253  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.048377  721192 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 12:53:21.048663  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.048687  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.048822  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.049007  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.049073  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.049420  721192 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 12:53:21.049444  721192 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 12:53:21.049463  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.049678  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.050116  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.050161  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.050379  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.050469  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0916 12:53:21.050630  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0916 12:53:21.050861  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.051384  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.051400  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.051472  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.051952  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.051965  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.052031  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.052290  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.052474  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.052538  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.052579  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.052760  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.052788  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.052801  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.052874  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.052887  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.052914  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.053420  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.053630  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.054143  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.054144  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.054353  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.054508  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.054524  721192 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 12:53:21.054664  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.055570  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.055876  721192 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 12:53:21.055894  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 12:53:21.055927  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.055984  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 12:53:21.057444  721192 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 12:53:21.057451  721192 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 12:53:21.057557  721192 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 12:53:21.057581  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.059310  721192 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 12:53:21.060258  721192 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 12:53:21.060274  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 12:53:21.060292  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.060401  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.060743  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.060764  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.060799  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.061410  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.061433  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.061471  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.061573  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.061641  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.061724  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.061905  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.061954  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.062160  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.062576  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.063590  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.063926  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.063950  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.064177  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.064346  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.064490  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.064642  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.071535  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40643
	I0916 12:53:21.071997  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.072498  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.072517  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.072872  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.073407  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:21.073452  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:21.075392  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0916 12:53:21.075826  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.076248  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.076264  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.076653  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.076716  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0916 12:53:21.076849  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.077191  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.078372  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.078394  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.078698  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.078759  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.078916  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.080408  721192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 12:53:21.080521  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.081905  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 12:53:21.082127  721192 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:53:21.082142  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 12:53:21.082162  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.083958  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 12:53:21.084314  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0916 12:53:21.084765  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.085267  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.085301  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.085614  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.085688  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.086184  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 12:53:21.087340  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 12:53:21.088367  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 12:53:21.089072  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45977
	I0916 12:53:21.089098  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0916 12:53:21.089072  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.089704  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.089778  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.089858  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.089875  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.089907  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.089977  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.090145  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.090303  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.090318  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.090358  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.090373  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.090526  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.090855  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.090873  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.091050  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 12:53:21.091059  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.091110  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.091468  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.092814  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 12:53:21.092828  721192 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 12:53:21.093050  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.094099  721192 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 12:53:21.094118  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 12:53:21.094135  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.094664  721192 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 12:53:21.094678  721192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 12:53:21.094703  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
	I0916 12:53:21.095240  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.095706  721192 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 12:53:21.095740  721192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 12:53:21.095763  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.095707  721192 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 12:53:21.095827  721192 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 12:53:21.095841  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.096160  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.096238  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.096258  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.096910  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.097482  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.097483  721192 out.go:177]   - Using image docker.io/busybox:stable
	I0916 12:53:21.097712  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.097937  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.097959  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.098132  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.098274  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.098372  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.098474  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.098930  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.099419  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.099445  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.099479  721192 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 12:53:21.099576  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.099734  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.099901  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.100054  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.100491  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.100574  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.100690  721192 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 12:53:21.100703  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 12:53:21.100719  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.100981  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.101004  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.101137  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.101291  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.101381  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.101547  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.101755  721192 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 12:53:21.102803  721192 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 12:53:21.102824  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 12:53:21.102841  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.103481  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.103887  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.103909  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.104051  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.104241  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.104393  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.104526  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.105656  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.106074  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.106105  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.106243  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.106416  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.106529  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.106708  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.109476  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41429
	I0916 12:53:21.109843  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.110493  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.110515  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.110929  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.111108  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.111150  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0916 12:53:21.111981  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:21.112742  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:21.112758  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:21.112799  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.113027  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:21.113142  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:21.114453  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:21.114635  721192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 12:53:21.114756  721192 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 12:53:21.114770  721192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 12:53:21.114793  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.116756  721192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 12:53:21.116882  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.117293  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.117320  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.117458  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.117632  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.117763  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.117870  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.119084  721192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 12:53:21.120210  721192 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 12:53:21.120222  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 12:53:21.120235  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:21.123031  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.123332  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:21.123355  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:21.123611  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:21.123760  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:21.123872  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:21.123971  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:21.420372  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 12:53:21.480128  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 12:53:21.544519  721192 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 12:53:21.544549  721192 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 12:53:21.554324  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:53:21.564376  721192 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 12:53:21.564399  721192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 12:53:21.572274  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 12:53:21.595732  721192 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 12:53:21.595762  721192 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 12:53:21.622368  721192 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 12:53:21.622398  721192 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 12:53:21.664270  721192 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 12:53:21.664297  721192 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 12:53:21.680504  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 12:53:21.696091  721192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:53:21.696200  721192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 12:53:21.701211  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 12:53:21.703887  721192 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 12:53:21.703908  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 12:53:21.716907  721192 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 12:53:21.716932  721192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 12:53:21.726944  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 12:53:21.758655  721192 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 12:53:21.758682  721192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 12:53:21.807134  721192 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 12:53:21.807159  721192 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 12:53:21.826123  721192 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 12:53:21.826147  721192 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 12:53:21.842703  721192 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 12:53:21.842723  721192 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 12:53:21.846738  721192 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 12:53:21.846757  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 12:53:21.871706  721192 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 12:53:21.871729  721192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 12:53:21.897017  721192 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 12:53:21.897050  721192 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 12:53:21.960063  721192 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 12:53:21.960094  721192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 12:53:22.079538  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 12:53:22.130619  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 12:53:22.138089  721192 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 12:53:22.138126  721192 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 12:53:22.146005  721192 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 12:53:22.146026  721192 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 12:53:22.147926  721192 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 12:53:22.147948  721192 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 12:53:22.158535  721192 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 12:53:22.158551  721192 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 12:53:22.241085  721192 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 12:53:22.241130  721192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 12:53:22.351380  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 12:53:22.355724  721192 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 12:53:22.355751  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 12:53:22.423953  721192 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 12:53:22.424001  721192 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 12:53:22.447251  721192 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 12:53:22.447273  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 12:53:22.493289  721192 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 12:53:22.493322  721192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 12:53:22.640861  721192 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 12:53:22.640900  721192 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 12:53:22.645465  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 12:53:22.670365  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 12:53:22.803302  721192 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 12:53:22.803341  721192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 12:53:22.908929  721192 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 12:53:22.908963  721192 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 12:53:23.135470  721192 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 12:53:23.135494  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 12:53:23.137962  721192 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 12:53:23.137983  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 12:53:23.398728  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 12:53:23.502863  721192 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 12:53:23.502905  721192 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 12:53:23.678533  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.258117797s)
	I0916 12:53:23.678592  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:23.678605  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:23.678535  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.198372191s)
	I0916 12:53:23.678720  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:23.678740  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:23.678974  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:23.679069  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:23.679081  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:23.679090  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:23.679119  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:23.679179  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:23.679197  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:23.679207  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:23.679219  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:23.679282  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:23.679311  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:23.679317  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:23.680597  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:23.680612  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:23.680628  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:23.711566  721192 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 12:53:23.711592  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 12:53:24.084556  721192 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 12:53:24.084582  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 12:53:24.375477  721192 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 12:53:24.375508  721192 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 12:53:24.713058  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 12:53:25.907333  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.352962946s)
	I0916 12:53:25.907397  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.335090705s)
	I0916 12:53:25.907407  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:25.907436  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:25.907457  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:25.907526  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:25.907770  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:25.907788  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:25.907805  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:25.907815  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:25.907830  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:25.907845  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:25.908040  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:25.908062  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:25.908072  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:25.908076  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:25.908080  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:25.909502  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:25.909502  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:25.909518  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:28.114939  721192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 12:53:28.114988  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:28.118229  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:28.118664  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:28.118698  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:28.118912  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:28.119142  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:28.119346  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:28.119506  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:28.306013  721192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 12:53:28.362959  721192 addons.go:234] Setting addon gcp-auth=true in "addons-682228"
	I0916 12:53:28.363029  721192 host.go:66] Checking if "addons-682228" exists ...
	I0916 12:53:28.363470  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:28.363520  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:28.379663  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0916 12:53:28.380206  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:28.380768  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:28.380800  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:28.381164  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:28.381623  721192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 12:53:28.381659  721192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 12:53:28.397733  721192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46019
	I0916 12:53:28.398144  721192 main.go:141] libmachine: () Calling .GetVersion
	I0916 12:53:28.398774  721192 main.go:141] libmachine: Using API Version  1
	I0916 12:53:28.398805  721192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 12:53:28.399169  721192 main.go:141] libmachine: () Calling .GetMachineName
	I0916 12:53:28.399584  721192 main.go:141] libmachine: (addons-682228) Calling .GetState
	I0916 12:53:28.401380  721192 main.go:141] libmachine: (addons-682228) Calling .DriverName
	I0916 12:53:28.401614  721192 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 12:53:28.401642  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHHostname
	I0916 12:53:28.404351  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:28.404787  721192 main.go:141] libmachine: (addons-682228) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7f:50", ip: ""} in network mk-addons-682228: {Iface:virbr1 ExpiryTime:2024-09-16 13:52:51 +0000 UTC Type:0 Mac:52:54:00:67:7f:50 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-682228 Clientid:01:52:54:00:67:7f:50}
	I0916 12:53:28.404818  721192 main.go:141] libmachine: (addons-682228) DBG | domain addons-682228 has defined IP address 192.168.39.232 and MAC address 52:54:00:67:7f:50 in network mk-addons-682228
	I0916 12:53:28.404964  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHPort
	I0916 12:53:28.405142  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHKeyPath
	I0916 12:53:28.405289  721192 main.go:141] libmachine: (addons-682228) Calling .GetSSHUsername
	I0916 12:53:28.405450  721192 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/addons-682228/id_rsa Username:docker}
	I0916 12:53:29.485841  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.805297409s)
	I0916 12:53:29.485893  721192 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.789764938s)
	I0916 12:53:29.485934  721192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.789707111s)
	I0916 12:53:29.485956  721192 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 12:53:29.485902  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486037  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486114  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.784871353s)
	I0916 12:53:29.486166  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486182  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486201  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.759232767s)
	I0916 12:53:29.486231  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486243  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486272  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.406684891s)
	I0916 12:53:29.486284  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.35563174s)
	I0916 12:53:29.486304  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486307  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486323  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486314  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486385  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.134967161s)
	I0916 12:53:29.486403  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486413  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486522  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.841025973s)
	W0916 12:53:29.486574  721192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 12:53:29.486602  721192 retry.go:31] will retry after 227.095542ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 12:53:29.486665  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.816262131s)
	I0916 12:53:29.486686  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486698  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486798  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.088025968s)
	I0916 12:53:29.486838  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.486850  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.486924  721192 node_ready.go:35] waiting up to 6m0s for node "addons-682228" to be "Ready" ...
	I0916 12:53:29.487004  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.487017  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.487013  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.487027  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.487034  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.487045  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.487055  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.487067  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.487076  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.487082  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.487342  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.487368  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.487375  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.487383  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.487390  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.487466  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.487489  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.487526  721192 addons.go:475] Verifying addon metrics-server=true in "addons-682228"
	I0916 12:53:29.487615  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.487649  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.487669  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.487685  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.487768  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.487816  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.487835  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.487852  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.487880  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.487947  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.487998  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.488025  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.488032  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.488059  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.488253  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.488285  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.488290  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.488297  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.488303  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.488351  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.488369  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.488390  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.488399  721192 addons.go:475] Verifying addon registry=true in "addons-682228"
	I0916 12:53:29.488474  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.488504  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.488511  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.488044  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.489379  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.489453  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.489460  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.489468  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.489473  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.489539  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.489557  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.489563  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.488006  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.490059  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:29.490084  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.490093  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.490148  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.490163  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.490646  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.490659  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.490668  721192 addons.go:475] Verifying addon ingress=true in "addons-682228"
	I0916 12:53:29.491550  721192 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-682228 service yakd-dashboard -n yakd-dashboard
	
	I0916 12:53:29.491579  721192 out.go:177] * Verifying registry addon...
	I0916 12:53:29.492209  721192 out.go:177] * Verifying ingress addon...
	I0916 12:53:29.494763  721192 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 12:53:29.494770  721192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 12:53:29.502473  721192 node_ready.go:49] node "addons-682228" has status "Ready":"True"
	I0916 12:53:29.502501  721192 node_ready.go:38] duration metric: took 15.548342ms for node "addons-682228" to be "Ready" ...
	I0916 12:53:29.502513  721192 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:53:29.508625  721192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 12:53:29.508647  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:29.509034  721192 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 12:53:29.509060  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:29.526641  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.526661  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.526911  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.526934  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 12:53:29.527041  721192 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0916 12:53:29.536420  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:29.536441  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:29.536804  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:29.536822  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:29.551831  721192 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h2drv" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:29.572924  721192 pod_ready.go:93] pod "coredns-7c65d6cfc9-h2drv" in "kube-system" namespace has status "Ready":"True"
	I0916 12:53:29.572956  721192 pod_ready.go:82] duration metric: took 21.094276ms for pod "coredns-7c65d6cfc9-h2drv" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:29.572970  721192 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-slw5b" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:29.617612  721192 pod_ready.go:93] pod "coredns-7c65d6cfc9-slw5b" in "kube-system" namespace has status "Ready":"True"
	I0916 12:53:29.617634  721192 pod_ready.go:82] duration metric: took 44.656638ms for pod "coredns-7c65d6cfc9-slw5b" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:29.617644  721192 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:29.714830  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 12:53:29.990610  721192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-682228" context rescaled to 1 replicas
	I0916 12:53:30.000295  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:30.000651  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:30.350921  721192 pod_ready.go:93] pod "etcd-addons-682228" in "kube-system" namespace has status "Ready":"True"
	I0916 12:53:30.350943  721192 pod_ready.go:82] duration metric: took 733.29301ms for pod "etcd-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:30.350954  721192 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:30.357384  721192 pod_ready.go:93] pod "kube-apiserver-addons-682228" in "kube-system" namespace has status "Ready":"True"
	I0916 12:53:30.357400  721192 pod_ready.go:82] duration metric: took 6.44038ms for pod "kube-apiserver-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:30.357409  721192 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:30.361329  721192 pod_ready.go:93] pod "kube-controller-manager-addons-682228" in "kube-system" namespace has status "Ready":"True"
	I0916 12:53:30.361350  721192 pod_ready.go:82] duration metric: took 3.934504ms for pod "kube-controller-manager-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:30.361360  721192 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8bs4z" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:30.502661  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:30.502970  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:30.691656  721192 pod_ready.go:93] pod "kube-proxy-8bs4z" in "kube-system" namespace has status "Ready":"True"
	I0916 12:53:30.691680  721192 pod_ready.go:82] duration metric: took 330.313145ms for pod "kube-proxy-8bs4z" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:30.691694  721192 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:31.001300  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:31.011051  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:31.105225  721192 pod_ready.go:93] pod "kube-scheduler-addons-682228" in "kube-system" namespace has status "Ready":"True"
	I0916 12:53:31.105255  721192 pod_ready.go:82] duration metric: took 413.552985ms for pod "kube-scheduler-addons-682228" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:31.105270  721192 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace to be "Ready" ...
	I0916 12:53:31.173921  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.460804452s)
	I0916 12:53:31.173970  721192 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.772337314s)
	I0916 12:53:31.174001  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:31.174020  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:31.174311  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:31.174334  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:31.174345  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:31.174355  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:31.174390  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:31.174623  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:31.174673  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:31.174688  721192 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-682228"
	I0916 12:53:31.174653  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:31.175525  721192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 12:53:31.176196  721192 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 12:53:31.177524  721192 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 12:53:31.178707  721192 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 12:53:31.178726  721192 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 12:53:31.178753  721192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 12:53:31.196453  721192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 12:53:31.196471  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:31.238075  721192 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 12:53:31.238099  721192 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 12:53:31.282309  721192 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 12:53:31.282336  721192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 12:53:31.329751  721192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 12:53:31.501614  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:31.501783  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:31.648585  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.933693796s)
	I0916 12:53:31.648668  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:31.648696  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:31.648971  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:31.648990  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:31.648999  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:31.649006  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:31.649008  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:31.649319  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:31.649328  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:31.683896  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:32.000742  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:32.001062  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:32.184626  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:32.618598  721192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288795332s)
	I0916 12:53:32.618677  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:32.618706  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:32.619124  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:32.619145  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:32.619148  721192 main.go:141] libmachine: (addons-682228) DBG | Closing plugin on server side
	I0916 12:53:32.619161  721192 main.go:141] libmachine: Making call to close driver server
	I0916 12:53:32.619179  721192 main.go:141] libmachine: (addons-682228) Calling .Close
	I0916 12:53:32.619531  721192 main.go:141] libmachine: Successfully made call to close driver server
	I0916 12:53:32.619548  721192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 12:53:32.621143  721192 addons.go:475] Verifying addon gcp-auth=true in "addons-682228"
	I0916 12:53:32.623669  721192 out.go:177] * Verifying gcp-auth addon...
	I0916 12:53:32.625792  721192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 12:53:32.658132  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:32.658309  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:32.671440  721192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 12:53:32.671461  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:32.757326  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:33.000732  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:33.000739  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:33.113263  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:33.128866  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:33.184749  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:33.500629  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:33.500690  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:33.633534  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:33.731064  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:33.999536  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:33.999803  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:34.128986  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:34.183831  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:34.500056  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:34.500542  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:34.629185  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:34.683064  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:34.999482  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:34.999979  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:35.129881  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:35.184047  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:35.498762  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:35.499212  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:35.610585  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:35.628929  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:35.683945  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:36.061095  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:36.061148  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:36.129819  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:36.185270  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:36.499483  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:36.499708  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:36.629845  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:36.683258  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:37.000231  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:37.000832  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:37.128917  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:37.184192  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:37.502568  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:37.502770  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:37.611429  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:37.628869  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:37.683599  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:37.999966  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:38.000203  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:38.128251  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:38.183605  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:38.498814  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:38.498881  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:38.629080  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:38.683096  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:39.002332  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:39.002662  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:39.129847  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:39.183977  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:39.498984  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:39.500788  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:39.611626  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:39.629312  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:39.683736  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:39.999965  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:40.000421  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:40.129454  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:40.183468  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:40.501914  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:40.504070  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:40.629067  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:40.684111  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:41.000562  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:41.000562  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:41.129336  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:41.183282  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:41.500894  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:41.501783  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:41.629291  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:41.683484  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:41.999501  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:41.999654  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:42.111505  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:42.128662  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:42.183699  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:42.499554  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:42.500056  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:42.629919  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:42.684049  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:43.035722  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:43.036045  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:43.128386  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:43.183578  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:43.499955  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:43.500584  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:43.629581  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:43.683293  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:44.000189  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:44.000314  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:44.128442  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:44.182828  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:44.498801  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:44.500317  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:44.611853  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:44.629430  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:44.683216  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:44.999803  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:45.000172  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:45.128967  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:45.183812  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:45.499839  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:45.500203  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:45.629224  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:45.682807  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:45.999265  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:46.000928  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:46.130930  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:46.232261  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:46.499668  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:46.501013  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:46.628775  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:46.683172  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:46.999319  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:46.999448  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:47.111275  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:47.128605  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:47.184736  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:47.499431  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:47.501816  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:47.629079  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:47.683041  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:47.999757  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:47.999996  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:48.129325  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:48.184128  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:48.499267  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:48.499588  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:48.629123  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:48.684138  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:48.999597  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:48.999882  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:49.112561  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:49.129897  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:49.183537  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:49.498778  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:49.500115  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:49.631542  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:49.683515  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:50.424521  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:50.425090  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:50.426125  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:50.427454  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:50.505570  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:50.509845  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:50.629168  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:50.684497  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:51.000173  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:51.000514  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:51.115028  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:51.131644  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:51.183768  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:51.498966  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:51.500310  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:51.628989  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:51.685036  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:51.999506  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:52.002124  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:52.130259  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:52.184618  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:52.500036  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:52.500203  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:52.628553  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:52.683301  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:52.999860  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:53.001539  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:53.543715  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:53.543875  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:53.544090  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:53.544524  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:53.545028  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:53.629075  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:53.683791  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:54.000016  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:54.000163  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:54.132139  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:54.182948  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:54.500344  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:54.500533  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:54.630922  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:54.683928  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:54.998966  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:54.999958  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:55.133306  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:55.183838  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:55.502711  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:55.502965  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:55.611304  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:55.629496  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:55.731988  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:56.000287  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:56.000411  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:56.128767  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:56.183560  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:56.506672  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:56.506822  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:56.631292  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:56.684311  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:57.000253  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:57.000428  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:57.129765  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:57.183880  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:57.501142  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:57.501294  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:57.629442  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:57.682875  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:58.000041  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:58.000147  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:58.110936  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:53:58.129659  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:58.183902  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:58.499849  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:58.500872  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:58.628840  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:58.683668  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:58.999522  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:58.999804  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:59.128610  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:59.183791  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:53:59.500008  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:53:59.500286  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:53:59.629656  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:53:59.731280  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:00.016051  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:00.016121  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:00.117007  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:00.133533  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:00.190621  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:00.502819  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:00.504199  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:00.630010  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:00.684504  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:01.001200  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:01.001731  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:01.129326  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:01.184076  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:01.499047  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:01.499522  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:01.629890  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:01.684862  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:01.999607  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:01.999783  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:02.129288  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:02.182988  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:02.499564  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:02.500781  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:02.610968  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:02.629148  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:02.684807  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:02.999430  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:02.999850  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:03.128970  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:03.183677  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:03.499249  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:03.500246  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:04.042780  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:04.043323  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:04.043621  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:04.043740  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:04.129356  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:04.183584  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:04.499453  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:04.500778  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:04.611737  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:04.630579  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:04.683717  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:05.000598  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:05.001383  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:05.130854  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:05.196891  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:05.500069  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:05.501855  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:05.628689  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:05.683549  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:06.000696  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:06.000933  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:06.297926  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:06.298267  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:06.499587  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:06.500070  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:06.612101  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:06.628563  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:06.683834  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:06.999023  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:06.999560  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:07.128761  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:07.183199  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:07.500725  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 12:54:07.501419  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:07.628816  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:07.683314  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:08.001046  721192 kapi.go:107] duration metric: took 38.506270607s to wait for kubernetes.io/minikube-addons=registry ...
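The "duration metric" line above is wall-clock time measured around the whole wait loop (here roughly 38.5s for the registry addon). Continuing the illustrative package from the earlier sketch (waitForLabeledPod and all other names remain assumptions, not minikube's code), the measurement amounts to:

	// timedWait wraps the polling helper and logs how long the wait took,
	// mirroring the "duration metric: took ... to wait for <selector>" lines.
	// Requires "log" in addition to the imports shown earlier.
	func timedWait(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := waitForLabeledPod(client, ns, selector, timeout)
		log.Printf("duration metric: took %s to wait for %s ...", time.Since(start), selector)
		return err
	}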
	I0916 12:54:08.001210  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:08.129483  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:08.183454  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:08.498828  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:08.628625  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:08.683342  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:09.111975  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:09.113946  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:09.129004  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:09.183559  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:09.499342  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:09.629261  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:09.683443  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:10.334479  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:10.335878  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:10.336284  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:10.499043  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:10.628862  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:10.683653  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:11.003206  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:11.128791  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:11.183015  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:11.499684  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:11.611759  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:11.629068  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:11.682671  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:12.001363  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:12.128906  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:12.183938  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:12.499159  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:12.628875  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:12.683311  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:13.002810  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:13.140305  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:13.580390  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:13.580467  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:13.615117  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:13.629872  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:13.684996  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:13.999831  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:14.130487  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:14.231281  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:14.499609  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:14.629925  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:14.686309  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:14.999186  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:15.131715  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:15.183939  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:15.498944  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:15.618015  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:15.629921  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:15.683399  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:15.999647  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:16.128518  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:16.186042  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:16.499767  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:16.629345  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:16.682902  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:17.000317  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:17.129487  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:17.230550  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:17.499162  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:17.629504  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:17.684318  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:17.999020  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:18.110947  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:18.129586  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:18.183706  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:18.850746  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:18.851589  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:18.852978  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:18.999677  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:19.130081  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:19.231259  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:19.501386  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:19.629727  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:19.693739  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:20.004289  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:20.114472  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:20.130500  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:20.194827  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:20.499320  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:20.629718  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:20.683803  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:20.998596  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:21.129220  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:21.182694  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:21.498945  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:21.628997  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:21.688896  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:21.998919  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:22.129172  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:22.182707  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:22.499696  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:22.611246  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:22.628445  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:22.683582  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:22.999173  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:23.131525  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:23.183564  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:23.500520  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:23.629221  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:23.682743  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:23.999484  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:24.128830  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:24.183671  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:24.499755  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:24.612121  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:24.628183  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:24.683874  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:24.999189  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:25.128814  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:25.184201  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:25.499931  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:25.629708  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:25.684512  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:25.998854  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:26.129340  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:26.183344  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:26.500410  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:26.629773  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:26.683469  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:27.049851  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:27.123988  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:27.149365  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:27.251597  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 12:54:27.499235  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:27.629774  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:27.732431  721192 kapi.go:107] duration metric: took 56.553674552s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 12:54:27.999813  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:28.130581  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:28.499481  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:28.628186  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:28.999457  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:29.129417  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:29.503272  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:29.611469  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:29.628648  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:29.998630  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:30.128775  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:30.499568  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:30.629073  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:30.999644  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:31.129957  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:31.498575  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:31.611657  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:31.628855  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:31.999568  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:32.128938  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:32.498865  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:32.629327  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:32.999412  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:33.128591  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:33.499864  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:33.618066  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:33.629423  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:34.001325  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:34.128833  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:34.499102  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:34.629283  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:34.999594  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:35.129516  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:35.499431  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:35.629592  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:36.007382  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:36.112973  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:36.129950  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:36.498969  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:36.628873  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:36.999234  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:37.129934  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:37.509071  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:37.630570  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:37.999590  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:38.178239  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:38.499545  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:38.611978  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:38.629789  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:38.999435  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:39.128861  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:39.499701  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:39.629190  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:39.999734  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:40.129791  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:40.498865  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:40.629459  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:40.999387  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:41.110903  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:41.129731  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:41.499574  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:41.628617  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:41.998501  721192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 12:54:42.130603  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:42.500779  721192 kapi.go:107] duration metric: took 1m13.006013027s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 12:54:42.641081  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:43.112114  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:43.130779  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:43.628784  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:44.130088  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:44.628349  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:45.131422  721192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 12:54:45.610815  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:45.629203  721192 kapi.go:107] duration metric: took 1m13.003411201s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 12:54:45.630691  721192 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-682228 cluster.
	I0916 12:54:45.631593  721192 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 12:54:45.632437  721192 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 12:54:45.633409  721192 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 12:54:45.634577  721192 addons.go:510] duration metric: took 1m24.70491612s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 12:54:47.611410  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:50.111899  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:52.114327  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:54.611788  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:56.612038  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:54:59.112576  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:55:01.612030  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:55:04.110859  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:55:06.111518  721192 pod_ready.go:103] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"False"
	I0916 12:55:07.613659  721192 pod_ready.go:93] pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace has status "Ready":"True"
	I0916 12:55:07.613700  721192 pod_ready.go:82] duration metric: took 1m36.508420731s for pod "metrics-server-84c5f94fbc-pxpxt" in "kube-system" namespace to be "Ready" ...
	I0916 12:55:07.613714  721192 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-d47pr" in "kube-system" namespace to be "Ready" ...
	I0916 12:55:07.618001  721192 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-d47pr" in "kube-system" namespace has status "Ready":"True"
	I0916 12:55:07.618023  721192 pod_ready.go:82] duration metric: took 4.302206ms for pod "nvidia-device-plugin-daemonset-d47pr" in "kube-system" namespace to be "Ready" ...
	I0916 12:55:07.618040  721192 pod_ready.go:39] duration metric: took 1m38.115514474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:55:07.618061  721192 api_server.go:52] waiting for apiserver process to appear ...
	I0916 12:55:07.618091  721192 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 12:55:07.618153  721192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 12:55:07.670099  721192 cri.go:89] found id: "19afa1d9a6f7f90b1f274e3335df49ff6839ea88e74957c8d373a86883d25505"
	I0916 12:55:07.670134  721192 cri.go:89] found id: ""
	I0916 12:55:07.670145  721192 logs.go:276] 1 containers: [19afa1d9a6f7f90b1f274e3335df49ff6839ea88e74957c8d373a86883d25505]
	I0916 12:55:07.670213  721192 ssh_runner.go:195] Run: which crictl
	I0916 12:55:07.674511  721192 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 12:55:07.674587  721192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 12:55:07.714433  721192 cri.go:89] found id: "7768e6a2948d05e9906c06b974f971f5ce43abdb36c168d6f9906b6c1e3e0d2f"
	I0916 12:55:07.714462  721192 cri.go:89] found id: ""
	I0916 12:55:07.714470  721192 logs.go:276] 1 containers: [7768e6a2948d05e9906c06b974f971f5ce43abdb36c168d6f9906b6c1e3e0d2f]
	I0916 12:55:07.714523  721192 ssh_runner.go:195] Run: which crictl
	I0916 12:55:07.719020  721192 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 12:55:07.719104  721192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 12:55:07.756557  721192 cri.go:89] found id: "483506cbd51599ae35d3b3547b346c42934a7b10043f7c93546cd61cad83d147"
	I0916 12:55:07.756581  721192 cri.go:89] found id: ""
	I0916 12:55:07.756592  721192 logs.go:276] 1 containers: [483506cbd51599ae35d3b3547b346c42934a7b10043f7c93546cd61cad83d147]
	I0916 12:55:07.756646  721192 ssh_runner.go:195] Run: which crictl
	I0916 12:55:07.760640  721192 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 12:55:07.760698  721192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 12:55:07.800200  721192 cri.go:89] found id: "db98593411139293ab947e5cb07f5fcd52a8bf36aa71448708d315be48dc9c9f"
	I0916 12:55:07.800224  721192 cri.go:89] found id: ""
	I0916 12:55:07.800231  721192 logs.go:276] 1 containers: [db98593411139293ab947e5cb07f5fcd52a8bf36aa71448708d315be48dc9c9f]
	I0916 12:55:07.800292  721192 ssh_runner.go:195] Run: which crictl
	I0916 12:55:07.804538  721192 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 12:55:07.804606  721192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 12:55:07.844258  721192 cri.go:89] found id: "05b18f4394581bc98b6044e869f3775bcf76f93c084bdd77b4ba6d848201f581"
	I0916 12:55:07.844291  721192 cri.go:89] found id: ""
	I0916 12:55:07.844298  721192 logs.go:276] 1 containers: [05b18f4394581bc98b6044e869f3775bcf76f93c084bdd77b4ba6d848201f581]
	I0916 12:55:07.844357  721192 ssh_runner.go:195] Run: which crictl
	I0916 12:55:07.848817  721192 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 12:55:07.848891  721192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 12:55:07.898206  721192 cri.go:89] found id: "380ca24e45ff9dcce918f4b1824c78723221c890c97fc6d21a01c8167605ffae"
	I0916 12:55:07.898237  721192 cri.go:89] found id: ""
	I0916 12:55:07.898248  721192 logs.go:276] 1 containers: [380ca24e45ff9dcce918f4b1824c78723221c890c97fc6d21a01c8167605ffae]
	I0916 12:55:07.898322  721192 ssh_runner.go:195] Run: which crictl
	I0916 12:55:07.902716  721192 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 12:55:07.902776  721192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 12:55:07.952319  721192 cri.go:89] found id: ""
	I0916 12:55:07.952344  721192 logs.go:276] 0 containers: []
	W0916 12:55:07.952353  721192 logs.go:278] No container was found matching "kindnet"
	I0916 12:55:07.952363  721192 logs.go:123] Gathering logs for kube-scheduler [db98593411139293ab947e5cb07f5fcd52a8bf36aa71448708d315be48dc9c9f] ...
	I0916 12:55:07.952380  721192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db98593411139293ab947e5cb07f5fcd52a8bf36aa71448708d315be48dc9c9f"
	I0916 12:55:07.997059  721192 logs.go:123] Gathering logs for CRI-O ...
	I0916 12:55:07.997102  721192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-682228 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
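
For anyone triaging this timeout locally, the kapi.go/pod_ready.go loop above is essentially polling labelled addon pods until they report Ready. A rough equivalent with kubectl, offered only as a sketch (it assumes the addons-682228 context from this run's kubeconfig is active and that the metrics-server pods carry the usual k8s-app=metrics-server label), is:

	# List every pod across namespaces to see which addon is still not Ready.
	kubectl --context addons-682228 get pods -A -o wide
	# Wait on the metrics-server pods the same way the test harness does.
	kubectl --context addons-682228 -n kube-system wait pod \
	  -l k8s-app=metrics-server --for=condition=Ready --timeout=10m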

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 node stop m02 -v=7 --alsologtostderr
E0916 13:41:31.184380  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:42:12.145852  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.457966423s)

                                                
                                                
-- stdout --
	* Stopping node "ha-190751-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:41:24.547492  739069 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:41:24.547738  739069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:41:24.547747  739069 out.go:358] Setting ErrFile to fd 2...
	I0916 13:41:24.547752  739069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:41:24.547915  739069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:41:24.548153  739069 mustload.go:65] Loading cluster: ha-190751
	I0916 13:41:24.548527  739069 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:41:24.548547  739069 stop.go:39] StopHost: ha-190751-m02
	I0916 13:41:24.548871  739069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:41:24.548907  739069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:41:24.564367  739069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0916 13:41:24.564884  739069 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:41:24.565546  739069 main.go:141] libmachine: Using API Version  1
	I0916 13:41:24.565571  739069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:41:24.565936  739069 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:41:24.568463  739069 out.go:177] * Stopping node "ha-190751-m02"  ...
	I0916 13:41:24.569861  739069 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 13:41:24.569899  739069 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:41:24.570111  739069 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 13:41:24.570145  739069 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:41:24.573539  739069 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:41:24.574231  739069 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:41:24.574258  739069 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:41:24.574469  739069 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:41:24.574666  739069 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:41:24.574905  739069 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:41:24.575105  739069 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:41:24.660728  739069 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 13:41:24.715791  739069 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 13:41:24.770432  739069 main.go:141] libmachine: Stopping "ha-190751-m02"...
	I0916 13:41:24.770472  739069 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:41:24.772007  739069 main.go:141] libmachine: (ha-190751-m02) Calling .Stop
	I0916 13:41:24.775328  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 0/120
	I0916 13:41:25.776584  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 1/120
	I0916 13:41:26.779152  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 2/120
	I0916 13:41:27.780807  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 3/120
	I0916 13:41:28.782086  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 4/120
	I0916 13:41:29.784355  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 5/120
	I0916 13:41:30.785701  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 6/120
	I0916 13:41:31.787309  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 7/120
	I0916 13:41:32.788523  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 8/120
	I0916 13:41:33.790402  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 9/120
	I0916 13:41:34.792638  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 10/120
	I0916 13:41:35.793891  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 11/120
	I0916 13:41:36.795435  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 12/120
	I0916 13:41:37.796633  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 13/120
	I0916 13:41:38.798374  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 14/120
	I0916 13:41:39.800252  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 15/120
	I0916 13:41:40.801446  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 16/120
	I0916 13:41:41.802812  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 17/120
	I0916 13:41:42.804074  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 18/120
	I0916 13:41:43.806262  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 19/120
	I0916 13:41:44.808638  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 20/120
	I0916 13:41:45.809982  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 21/120
	I0916 13:41:46.811438  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 22/120
	I0916 13:41:47.812807  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 23/120
	I0916 13:41:48.814235  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 24/120
	I0916 13:41:49.816207  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 25/120
	I0916 13:41:50.817833  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 26/120
	I0916 13:41:51.820056  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 27/120
	I0916 13:41:52.821442  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 28/120
	I0916 13:41:53.822735  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 29/120
	I0916 13:41:54.824682  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 30/120
	I0916 13:41:55.826434  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 31/120
	I0916 13:41:56.827794  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 32/120
	I0916 13:41:57.829162  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 33/120
	I0916 13:41:58.830469  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 34/120
	I0916 13:41:59.832275  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 35/120
	I0916 13:42:00.833736  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 36/120
	I0916 13:42:01.834992  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 37/120
	I0916 13:42:02.836472  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 38/120
	I0916 13:42:03.837855  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 39/120
	I0916 13:42:04.839940  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 40/120
	I0916 13:42:05.841083  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 41/120
	I0916 13:42:06.842529  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 42/120
	I0916 13:42:07.843880  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 43/120
	I0916 13:42:08.845269  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 44/120
	I0916 13:42:09.847001  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 45/120
	I0916 13:42:10.848273  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 46/120
	I0916 13:42:11.849492  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 47/120
	I0916 13:42:12.850657  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 48/120
	I0916 13:42:13.852674  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 49/120
	I0916 13:42:14.854795  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 50/120
	I0916 13:42:15.856005  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 51/120
	I0916 13:42:16.857376  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 52/120
	I0916 13:42:17.858715  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 53/120
	I0916 13:42:18.860907  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 54/120
	I0916 13:42:19.862772  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 55/120
	I0916 13:42:20.864865  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 56/120
	I0916 13:42:21.866181  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 57/120
	I0916 13:42:22.868320  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 58/120
	I0916 13:42:23.869551  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 59/120
	I0916 13:42:24.871214  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 60/120
	I0916 13:42:25.872495  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 61/120
	I0916 13:42:26.873757  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 62/120
	I0916 13:42:27.875197  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 63/120
	I0916 13:42:28.876595  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 64/120
	I0916 13:42:29.878421  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 65/120
	I0916 13:42:30.880652  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 66/120
	I0916 13:42:31.881847  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 67/120
	I0916 13:42:32.883982  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 68/120
	I0916 13:42:33.885126  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 69/120
	I0916 13:42:34.886969  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 70/120
	I0916 13:42:35.888636  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 71/120
	I0916 13:42:36.889797  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 72/120
	I0916 13:42:37.890959  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 73/120
	I0916 13:42:38.892099  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 74/120
	I0916 13:42:39.893913  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 75/120
	I0916 13:42:40.896214  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 76/120
	I0916 13:42:41.897463  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 77/120
	I0916 13:42:42.899003  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 78/120
	I0916 13:42:43.900383  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 79/120
	I0916 13:42:44.902100  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 80/120
	I0916 13:42:45.903405  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 81/120
	I0916 13:42:46.905700  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 82/120
	I0916 13:42:47.907110  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 83/120
	I0916 13:42:48.908414  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 84/120
	I0916 13:42:49.910364  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 85/120
	I0916 13:42:50.911574  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 86/120
	I0916 13:42:51.912811  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 87/120
	I0916 13:42:52.914291  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 88/120
	I0916 13:42:53.916138  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 89/120
	I0916 13:42:54.918186  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 90/120
	I0916 13:42:55.919361  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 91/120
	I0916 13:42:56.920508  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 92/120
	I0916 13:42:57.921791  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 93/120
	I0916 13:42:58.922914  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 94/120
	I0916 13:42:59.924639  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 95/120
	I0916 13:43:00.926010  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 96/120
	I0916 13:43:01.928073  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 97/120
	I0916 13:43:02.929483  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 98/120
	I0916 13:43:03.930668  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 99/120
	I0916 13:43:04.932489  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 100/120
	I0916 13:43:05.933831  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 101/120
	I0916 13:43:06.935088  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 102/120
	I0916 13:43:07.936460  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 103/120
	I0916 13:43:08.937643  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 104/120
	I0916 13:43:09.939530  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 105/120
	I0916 13:43:10.940794  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 106/120
	I0916 13:43:11.942735  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 107/120
	I0916 13:43:12.944151  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 108/120
	I0916 13:43:13.945269  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 109/120
	I0916 13:43:14.947400  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 110/120
	I0916 13:43:15.948695  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 111/120
	I0916 13:43:16.950166  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 112/120
	I0916 13:43:17.952319  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 113/120
	I0916 13:43:18.953779  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 114/120
	I0916 13:43:19.955591  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 115/120
	I0916 13:43:20.956974  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 116/120
	I0916 13:43:21.958303  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 117/120
	I0916 13:43:22.959494  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 118/120
	I0916 13:43:23.960846  739069 main.go:141] libmachine: (ha-190751-m02) Waiting for machine to stop 119/120
	I0916 13:43:24.962374  739069 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 13:43:24.962543  739069 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-190751 node stop m02 -v=7 --alsologtostderr": exit status 30
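
Since libmachine gave up after its 120 one-second polls with the domain still reported as "Running", a useful follow-up on the CI host is to query libvirt directly. The commands below are only a diagnostic sketch and assume the default qemu:///system connection that the kvm2 driver uses:

	# Ask libvirt what state it thinks the guest is in.
	virsh -c qemu:///system domstate ha-190751-m02
	# Retry a graceful ACPI shutdown, or force a power-off if the guest is wedged.
	virsh -c qemu:///system shutdown ha-190751-m02
	virsh -c qemu:///system destroy ha-190751-m02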
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
E0916 13:43:34.067327  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (19.182187898s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:43:25.006908  739529 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:43:25.007212  739529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:43:25.007226  739529 out.go:358] Setting ErrFile to fd 2...
	I0916 13:43:25.007232  739529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:43:25.007516  739529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:43:25.007792  739529 out.go:352] Setting JSON to false
	I0916 13:43:25.007842  739529 mustload.go:65] Loading cluster: ha-190751
	I0916 13:43:25.007942  739529 notify.go:220] Checking for updates...
	I0916 13:43:25.008437  739529 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:43:25.008460  739529 status.go:255] checking status of ha-190751 ...
	I0916 13:43:25.009080  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:25.009121  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:25.025992  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I0916 13:43:25.026649  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:25.027380  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:25.027404  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:25.027873  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:25.028068  739529 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:43:25.029782  739529 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:43:25.029800  739529 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:43:25.030110  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:25.030161  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:25.045306  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0916 13:43:25.045720  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:25.046152  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:25.046174  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:25.046531  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:25.046728  739529 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:43:25.050012  739529 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:25.050439  739529 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:43:25.050461  739529 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:25.050619  739529 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:43:25.050906  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:25.050951  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:25.065551  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34321
	I0916 13:43:25.066149  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:25.066622  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:25.066648  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:25.066952  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:25.067153  739529 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:43:25.067369  739529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:25.067411  739529 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:43:25.070127  739529 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:25.070604  739529 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:43:25.070645  739529 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:25.070744  739529 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:43:25.070926  739529 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:43:25.071185  739529 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:43:25.071387  739529 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:43:25.160443  739529 ssh_runner.go:195] Run: systemctl --version
	I0916 13:43:25.166935  739529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:25.184040  739529 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:43:25.184093  739529 api_server.go:166] Checking apiserver status ...
	I0916 13:43:25.184133  739529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:43:25.201250  739529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:43:25.212990  739529 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:43:25.213044  739529 ssh_runner.go:195] Run: ls
	I0916 13:43:25.218811  739529 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:43:25.223593  739529 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:43:25.223623  739529 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:43:25.223637  739529 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:43:25.223682  739529 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:43:25.224049  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:25.224098  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:25.239444  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0916 13:43:25.239863  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:25.240345  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:25.240378  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:25.240710  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:25.240923  739529 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:43:25.242600  739529 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:43:25.242620  739529 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:43:25.243058  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:25.243106  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:25.261219  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0916 13:43:25.261604  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:25.262062  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:25.262083  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:25.262406  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:25.262567  739529 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:43:25.265105  739529 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:25.265530  739529 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:43:25.265556  739529 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:25.265696  739529 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:43:25.265990  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:25.266032  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:25.281621  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38165
	I0916 13:43:25.282016  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:25.282498  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:25.282516  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:25.282851  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:25.283017  739529 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:43:25.283192  739529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:25.283211  739529 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:43:25.285793  739529 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:25.286221  739529 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:43:25.286249  739529 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:25.286368  739529 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:43:25.286536  739529 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:43:25.286683  739529 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:43:25.286844  739529 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	W0916 13:43:43.781967  739529 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:43:43.782081  739529 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	E0916 13:43:43.782107  739529 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:43:43.782120  739529 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:43:43.782152  739529 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:43:43.782159  739529 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:43:43.782602  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:43.782664  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:43.797929  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0916 13:43:43.798423  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:43.798897  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:43.798921  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:43.799243  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:43.799414  739529 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:43:43.800936  739529 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:43:43.800951  739529 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:43:43.801299  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:43.801349  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:43.815623  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0916 13:43:43.816055  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:43.816510  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:43.816536  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:43.816844  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:43.817016  739529 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:43:43.819799  739529 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:43.820304  739529 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:43:43.820337  739529 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:43.820468  739529 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:43:43.820857  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:43.820892  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:43.835120  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I0916 13:43:43.835535  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:43.835967  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:43.835986  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:43.836305  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:43.836481  739529 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:43:43.836646  739529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:43.836665  739529 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:43:43.839322  739529 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:43.839721  739529 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:43:43.839748  739529 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:43.839871  739529 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:43:43.840044  739529 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:43:43.840189  739529 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:43:43.840344  739529 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:43:43.922523  739529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:43.944286  739529 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:43:43.944320  739529 api_server.go:166] Checking apiserver status ...
	I0916 13:43:43.944359  739529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:43:43.962102  739529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:43:43.973321  739529 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:43:43.973364  739529 ssh_runner.go:195] Run: ls
	I0916 13:43:43.977574  739529 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:43:43.981977  739529 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:43:43.981998  739529 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:43:43.982008  739529 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:43:43.982028  739529 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:43:43.982326  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:43.982374  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:43.997261  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0916 13:43:43.997735  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:43.998244  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:43.998266  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:43.998602  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:43.998789  739529 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:43:44.000234  739529 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:43:44.000250  739529 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:43:44.000518  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:44.000571  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:44.014744  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I0916 13:43:44.015197  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:44.015660  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:44.015680  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:44.015990  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:44.016170  739529 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:43:44.019017  739529 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:44.019495  739529 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:43:44.019521  739529 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:44.019722  739529 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:43:44.020001  739529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:44.020035  739529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:44.034846  739529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46743
	I0916 13:43:44.035249  739529 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:44.035753  739529 main.go:141] libmachine: Using API Version  1
	I0916 13:43:44.035772  739529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:44.036107  739529 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:44.036298  739529 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:43:44.036486  739529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:44.036506  739529 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:43:44.038979  739529 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:44.039377  739529 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:43:44.039401  739529 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:44.039523  739529 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:43:44.039666  739529 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:43:44.039775  739529 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:43:44.039891  739529 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:43:44.126390  739529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:44.143855  739529 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
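The status probe traced in the stderr block above reduces to an HTTPS GET against the control-plane endpoint's /healthz path, with a 200 response and an "ok" body treated as a running apiserver. A minimal, self-contained Go sketch of that pattern (not minikube's actual implementation; the endpoint is copied from the log, and TLS verification is skipped only to keep the example standalone):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy performs the same kind of check the log shows:
	// GET <endpoint>/healthz and treat 200 + "ok" as Running.
	func apiserverHealthy(endpoint string) (bool, error) {
		// In the real flow the cluster CA from the kubeconfig would be used;
		// skipping verification keeps this sketch dependency-free.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}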
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr" : exit status 3
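To reproduce the failing check outside the test harness, the same command can be re-run and its exit code inspected. A hedged Go sketch (the binary path, profile name, and flags are copied from the assertion above; the rest is illustrative, not the harness's own code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the command from the failed assertion and capture its output and exit code.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-190751", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The harness observed exit status 3 here.
			fmt.Printf("minikube status exited with code %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run command:", err)
		}
	}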
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-190751 -n ha-190751
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-190751 logs -n 25: (1.350726439s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751:/home/docker/cp-test_ha-190751-m03_ha-190751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751 sudo cat                                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m02:/home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m04 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp testdata/cp-test.txt                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751:/home/docker/cp-test_ha-190751-m04_ha-190751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751 sudo cat                                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m02:/home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03:/home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m03 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-190751 node stop m02 -v=7                                                     | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 13:36:56
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 13:36:56.678517  735111 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:36:56.678787  735111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:36:56.678797  735111 out.go:358] Setting ErrFile to fd 2...
	I0916 13:36:56.678801  735111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:36:56.679003  735111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:36:56.679607  735111 out.go:352] Setting JSON to false
	I0916 13:36:56.680520  735111 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11966,"bootTime":1726481851,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 13:36:56.680631  735111 start.go:139] virtualization: kvm guest
	I0916 13:36:56.682617  735111 out.go:177] * [ha-190751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 13:36:56.683792  735111 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 13:36:56.683791  735111 notify.go:220] Checking for updates...
	I0916 13:36:56.685057  735111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 13:36:56.686202  735111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:36:56.687271  735111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:36:56.688199  735111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 13:36:56.689143  735111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 13:36:56.690257  735111 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 13:36:56.723912  735111 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 13:36:56.725038  735111 start.go:297] selected driver: kvm2
	I0916 13:36:56.725048  735111 start.go:901] validating driver "kvm2" against <nil>
	I0916 13:36:56.725058  735111 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 13:36:56.725720  735111 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:36:56.725788  735111 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 13:36:56.739803  735111 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 13:36:56.739851  735111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 13:36:56.740082  735111 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:36:56.740112  735111 cni.go:84] Creating CNI manager for ""
	I0916 13:36:56.740151  735111 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 13:36:56.740158  735111 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 13:36:56.740208  735111 start.go:340] cluster config:
	{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0916 13:36:56.740299  735111 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:36:56.741805  735111 out.go:177] * Starting "ha-190751" primary control-plane node in "ha-190751" cluster
	I0916 13:36:56.742781  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:36:56.742820  735111 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 13:36:56.742829  735111 cache.go:56] Caching tarball of preloaded images
	I0916 13:36:56.742896  735111 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:36:56.742905  735111 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:36:56.743197  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:36:56.743218  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json: {Name:mk79170c9af09964bad9fa686bda7acb0bb551ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:36:56.743344  735111 start.go:360] acquireMachinesLock for ha-190751: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:36:56.743372  735111 start.go:364] duration metric: took 14.904µs to acquireMachinesLock for "ha-190751"
	I0916 13:36:56.743390  735111 start.go:93] Provisioning new machine with config: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:36:56.743443  735111 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 13:36:56.744759  735111 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 13:36:56.744866  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:36:56.744897  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:36:56.758738  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0916 13:36:56.759112  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:36:56.759587  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:36:56.759607  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:36:56.759901  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:36:56.760105  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:36:56.760231  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:36:56.760374  735111 start.go:159] libmachine.API.Create for "ha-190751" (driver="kvm2")
	I0916 13:36:56.760406  735111 client.go:168] LocalClient.Create starting
	I0916 13:36:56.760439  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 13:36:56.760479  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:36:56.760496  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:36:56.760560  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 13:36:56.760578  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:36:56.760592  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:36:56.760612  735111 main.go:141] libmachine: Running pre-create checks...
	I0916 13:36:56.760620  735111 main.go:141] libmachine: (ha-190751) Calling .PreCreateCheck
	I0916 13:36:56.761019  735111 main.go:141] libmachine: (ha-190751) Calling .GetConfigRaw
	I0916 13:36:56.761357  735111 main.go:141] libmachine: Creating machine...
	I0916 13:36:56.761369  735111 main.go:141] libmachine: (ha-190751) Calling .Create
	I0916 13:36:56.761471  735111 main.go:141] libmachine: (ha-190751) Creating KVM machine...
	I0916 13:36:56.762874  735111 main.go:141] libmachine: (ha-190751) DBG | found existing default KVM network
	I0916 13:36:56.763511  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:56.763387  735134 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0916 13:36:56.763544  735111 main.go:141] libmachine: (ha-190751) DBG | created network xml: 
	I0916 13:36:56.763557  735111 main.go:141] libmachine: (ha-190751) DBG | <network>
	I0916 13:36:56.763573  735111 main.go:141] libmachine: (ha-190751) DBG |   <name>mk-ha-190751</name>
	I0916 13:36:56.763580  735111 main.go:141] libmachine: (ha-190751) DBG |   <dns enable='no'/>
	I0916 13:36:56.763585  735111 main.go:141] libmachine: (ha-190751) DBG |   
	I0916 13:36:56.763592  735111 main.go:141] libmachine: (ha-190751) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 13:36:56.763597  735111 main.go:141] libmachine: (ha-190751) DBG |     <dhcp>
	I0916 13:36:56.763604  735111 main.go:141] libmachine: (ha-190751) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 13:36:56.763611  735111 main.go:141] libmachine: (ha-190751) DBG |     </dhcp>
	I0916 13:36:56.763621  735111 main.go:141] libmachine: (ha-190751) DBG |   </ip>
	I0916 13:36:56.763628  735111 main.go:141] libmachine: (ha-190751) DBG |   
	I0916 13:36:56.763641  735111 main.go:141] libmachine: (ha-190751) DBG | </network>
	I0916 13:36:56.763652  735111 main.go:141] libmachine: (ha-190751) DBG | 
	I0916 13:36:56.768237  735111 main.go:141] libmachine: (ha-190751) DBG | trying to create private KVM network mk-ha-190751 192.168.39.0/24...
	I0916 13:36:56.829521  735111 main.go:141] libmachine: (ha-190751) DBG | private KVM network mk-ha-190751 192.168.39.0/24 created
	I0916 13:36:56.829557  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:56.829473  735134 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:36:56.829572  735111 main.go:141] libmachine: (ha-190751) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751 ...
	I0916 13:36:56.829590  735111 main.go:141] libmachine: (ha-190751) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 13:36:56.829615  735111 main.go:141] libmachine: (ha-190751) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 13:36:57.095789  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:57.095611  735134 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa...
	I0916 13:36:57.157560  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:57.157443  735134 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/ha-190751.rawdisk...
	I0916 13:36:57.157596  735111 main.go:141] libmachine: (ha-190751) DBG | Writing magic tar header
	I0916 13:36:57.157615  735111 main.go:141] libmachine: (ha-190751) DBG | Writing SSH key tar header
	I0916 13:36:57.157625  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:57.157549  735134 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751 ...
	I0916 13:36:57.157641  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751
	I0916 13:36:57.157724  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751 (perms=drwx------)
	I0916 13:36:57.157752  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 13:36:57.157764  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 13:36:57.157777  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:36:57.157804  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 13:36:57.157815  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 13:36:57.157826  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins
	I0916 13:36:57.157836  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home
	I0916 13:36:57.157847  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 13:36:57.157862  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 13:36:57.157875  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 13:36:57.157888  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 13:36:57.157898  735111 main.go:141] libmachine: (ha-190751) Creating domain...
	I0916 13:36:57.157916  735111 main.go:141] libmachine: (ha-190751) DBG | Skipping /home - not owner
	I0916 13:36:57.158843  735111 main.go:141] libmachine: (ha-190751) define libvirt domain using xml: 
	I0916 13:36:57.158858  735111 main.go:141] libmachine: (ha-190751) <domain type='kvm'>
	I0916 13:36:57.158864  735111 main.go:141] libmachine: (ha-190751)   <name>ha-190751</name>
	I0916 13:36:57.158869  735111 main.go:141] libmachine: (ha-190751)   <memory unit='MiB'>2200</memory>
	I0916 13:36:57.158874  735111 main.go:141] libmachine: (ha-190751)   <vcpu>2</vcpu>
	I0916 13:36:57.158877  735111 main.go:141] libmachine: (ha-190751)   <features>
	I0916 13:36:57.158882  735111 main.go:141] libmachine: (ha-190751)     <acpi/>
	I0916 13:36:57.158886  735111 main.go:141] libmachine: (ha-190751)     <apic/>
	I0916 13:36:57.158890  735111 main.go:141] libmachine: (ha-190751)     <pae/>
	I0916 13:36:57.158901  735111 main.go:141] libmachine: (ha-190751)     
	I0916 13:36:57.158911  735111 main.go:141] libmachine: (ha-190751)   </features>
	I0916 13:36:57.158918  735111 main.go:141] libmachine: (ha-190751)   <cpu mode='host-passthrough'>
	I0916 13:36:57.158928  735111 main.go:141] libmachine: (ha-190751)   
	I0916 13:36:57.158944  735111 main.go:141] libmachine: (ha-190751)   </cpu>
	I0916 13:36:57.158954  735111 main.go:141] libmachine: (ha-190751)   <os>
	I0916 13:36:57.158978  735111 main.go:141] libmachine: (ha-190751)     <type>hvm</type>
	I0916 13:36:57.158998  735111 main.go:141] libmachine: (ha-190751)     <boot dev='cdrom'/>
	I0916 13:36:57.159028  735111 main.go:141] libmachine: (ha-190751)     <boot dev='hd'/>
	I0916 13:36:57.159049  735111 main.go:141] libmachine: (ha-190751)     <bootmenu enable='no'/>
	I0916 13:36:57.159057  735111 main.go:141] libmachine: (ha-190751)   </os>
	I0916 13:36:57.159062  735111 main.go:141] libmachine: (ha-190751)   <devices>
	I0916 13:36:57.159071  735111 main.go:141] libmachine: (ha-190751)     <disk type='file' device='cdrom'>
	I0916 13:36:57.159077  735111 main.go:141] libmachine: (ha-190751)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/boot2docker.iso'/>
	I0916 13:36:57.159087  735111 main.go:141] libmachine: (ha-190751)       <target dev='hdc' bus='scsi'/>
	I0916 13:36:57.159097  735111 main.go:141] libmachine: (ha-190751)       <readonly/>
	I0916 13:36:57.159105  735111 main.go:141] libmachine: (ha-190751)     </disk>
	I0916 13:36:57.159115  735111 main.go:141] libmachine: (ha-190751)     <disk type='file' device='disk'>
	I0916 13:36:57.159136  735111 main.go:141] libmachine: (ha-190751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 13:36:57.159151  735111 main.go:141] libmachine: (ha-190751)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/ha-190751.rawdisk'/>
	I0916 13:36:57.159159  735111 main.go:141] libmachine: (ha-190751)       <target dev='hda' bus='virtio'/>
	I0916 13:36:57.159163  735111 main.go:141] libmachine: (ha-190751)     </disk>
	I0916 13:36:57.159171  735111 main.go:141] libmachine: (ha-190751)     <interface type='network'>
	I0916 13:36:57.159182  735111 main.go:141] libmachine: (ha-190751)       <source network='mk-ha-190751'/>
	I0916 13:36:57.159194  735111 main.go:141] libmachine: (ha-190751)       <model type='virtio'/>
	I0916 13:36:57.159201  735111 main.go:141] libmachine: (ha-190751)     </interface>
	I0916 13:36:57.159212  735111 main.go:141] libmachine: (ha-190751)     <interface type='network'>
	I0916 13:36:57.159222  735111 main.go:141] libmachine: (ha-190751)       <source network='default'/>
	I0916 13:36:57.159230  735111 main.go:141] libmachine: (ha-190751)       <model type='virtio'/>
	I0916 13:36:57.159239  735111 main.go:141] libmachine: (ha-190751)     </interface>
	I0916 13:36:57.159252  735111 main.go:141] libmachine: (ha-190751)     <serial type='pty'>
	I0916 13:36:57.159261  735111 main.go:141] libmachine: (ha-190751)       <target port='0'/>
	I0916 13:36:57.159266  735111 main.go:141] libmachine: (ha-190751)     </serial>
	I0916 13:36:57.159273  735111 main.go:141] libmachine: (ha-190751)     <console type='pty'>
	I0916 13:36:57.159282  735111 main.go:141] libmachine: (ha-190751)       <target type='serial' port='0'/>
	I0916 13:36:57.159296  735111 main.go:141] libmachine: (ha-190751)     </console>
	I0916 13:36:57.159304  735111 main.go:141] libmachine: (ha-190751)     <rng model='virtio'>
	I0916 13:36:57.159312  735111 main.go:141] libmachine: (ha-190751)       <backend model='random'>/dev/random</backend>
	I0916 13:36:57.159322  735111 main.go:141] libmachine: (ha-190751)     </rng>
	I0916 13:36:57.159328  735111 main.go:141] libmachine: (ha-190751)     
	I0916 13:36:57.159338  735111 main.go:141] libmachine: (ha-190751)     
	I0916 13:36:57.159344  735111 main.go:141] libmachine: (ha-190751)   </devices>
	I0916 13:36:57.159358  735111 main.go:141] libmachine: (ha-190751) </domain>
	I0916 13:36:57.159369  735111 main.go:141] libmachine: (ha-190751) 
	I0916 13:36:57.163337  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:69:e2:cf in network default
	I0916 13:36:57.163907  735111 main.go:141] libmachine: (ha-190751) Ensuring networks are active...
	I0916 13:36:57.163927  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:57.164583  735111 main.go:141] libmachine: (ha-190751) Ensuring network default is active
	I0916 13:36:57.164908  735111 main.go:141] libmachine: (ha-190751) Ensuring network mk-ha-190751 is active
	I0916 13:36:57.165378  735111 main.go:141] libmachine: (ha-190751) Getting domain xml...
	I0916 13:36:57.166090  735111 main.go:141] libmachine: (ha-190751) Creating domain...
	I0916 13:36:58.333062  735111 main.go:141] libmachine: (ha-190751) Waiting to get IP...
	I0916 13:36:58.333963  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:58.334354  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:58.334424  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:58.334357  735134 retry.go:31] will retry after 279.525118ms: waiting for machine to come up
	I0916 13:36:58.615804  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:58.616232  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:58.616272  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:58.616184  735134 retry.go:31] will retry after 363.505809ms: waiting for machine to come up
	I0916 13:36:58.981741  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:58.982158  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:58.982188  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:58.982109  735134 retry.go:31] will retry after 369.018808ms: waiting for machine to come up
	I0916 13:36:59.352601  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:59.353031  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:59.353063  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:59.352967  735134 retry.go:31] will retry after 560.553294ms: waiting for machine to come up
	I0916 13:36:59.914639  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:59.915027  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:59.915059  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:59.914973  735134 retry.go:31] will retry after 665.558726ms: waiting for machine to come up
	I0916 13:37:00.581880  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:00.582306  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:00.582332  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:00.582263  735134 retry.go:31] will retry after 948.01504ms: waiting for machine to come up
	I0916 13:37:01.531610  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:01.532007  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:01.532040  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:01.531979  735134 retry.go:31] will retry after 736.553093ms: waiting for machine to come up
	I0916 13:37:02.270426  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:02.270790  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:02.270829  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:02.270735  735134 retry.go:31] will retry after 1.270424871s: waiting for machine to come up
	I0916 13:37:03.543093  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:03.543487  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:03.543508  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:03.543459  735134 retry.go:31] will retry after 1.59125153s: waiting for machine to come up
	I0916 13:37:05.136091  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:05.136429  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:05.136458  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:05.136382  735134 retry.go:31] will retry after 1.693626671s: waiting for machine to come up
	I0916 13:37:06.832020  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:06.832535  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:06.832564  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:06.832491  735134 retry.go:31] will retry after 1.948764787s: waiting for machine to come up
	I0916 13:37:08.783618  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:08.784008  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:08.784030  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:08.783966  735134 retry.go:31] will retry after 2.647820583s: waiting for machine to come up
	I0916 13:37:11.433054  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:11.433446  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:11.433474  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:11.433404  735134 retry.go:31] will retry after 3.505266082s: waiting for machine to come up
	I0916 13:37:14.942445  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:14.942834  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:14.942856  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:14.942793  735134 retry.go:31] will retry after 3.656594435s: waiting for machine to come up
	I0916 13:37:18.601473  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.601963  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has current primary IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.601994  735111 main.go:141] libmachine: (ha-190751) Found IP for machine: 192.168.39.94
	I0916 13:37:18.602008  735111 main.go:141] libmachine: (ha-190751) Reserving static IP address...
	I0916 13:37:18.602385  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find host DHCP lease matching {name: "ha-190751", mac: "52:54:00:c8:dd:8b", ip: "192.168.39.94"} in network mk-ha-190751
	I0916 13:37:18.672709  735111 main.go:141] libmachine: (ha-190751) Reserved static IP address: 192.168.39.94
	I0916 13:37:18.672734  735111 main.go:141] libmachine: (ha-190751) DBG | Getting to WaitForSSH function...
	I0916 13:37:18.672742  735111 main.go:141] libmachine: (ha-190751) Waiting for SSH to be available...
	I0916 13:37:18.675170  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.675604  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:18.675655  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.675818  735111 main.go:141] libmachine: (ha-190751) DBG | Using SSH client type: external
	I0916 13:37:18.675849  735111 main.go:141] libmachine: (ha-190751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa (-rw-------)
	I0916 13:37:18.675884  735111 main.go:141] libmachine: (ha-190751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:37:18.675899  735111 main.go:141] libmachine: (ha-190751) DBG | About to run SSH command:
	I0916 13:37:18.675935  735111 main.go:141] libmachine: (ha-190751) DBG | exit 0
	I0916 13:37:18.801655  735111 main.go:141] libmachine: (ha-190751) DBG | SSH cmd err, output: <nil>: 
	I0916 13:37:18.801941  735111 main.go:141] libmachine: (ha-190751) KVM machine creation complete!
	I0916 13:37:18.802283  735111 main.go:141] libmachine: (ha-190751) Calling .GetConfigRaw
	I0916 13:37:18.802859  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:18.803052  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:18.803228  735111 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 13:37:18.803245  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:18.804506  735111 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 13:37:18.804519  735111 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 13:37:18.804524  735111 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 13:37:18.804529  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:18.806823  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.807131  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:18.807155  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.807290  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:18.807448  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.807568  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.807667  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:18.807798  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:18.808046  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:18.808060  735111 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 13:37:18.916996  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:37:18.917018  735111 main.go:141] libmachine: Detecting the provisioner...
	I0916 13:37:18.917027  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:18.920186  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.920536  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:18.920568  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.920770  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:18.921013  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.921176  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.921327  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:18.921499  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:18.921739  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:18.921763  735111 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 13:37:19.030221  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 13:37:19.030307  735111 main.go:141] libmachine: found compatible host: buildroot
	I0916 13:37:19.030318  735111 main.go:141] libmachine: Provisioning with buildroot...
	I0916 13:37:19.030326  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:37:19.030581  735111 buildroot.go:166] provisioning hostname "ha-190751"
	I0916 13:37:19.030614  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:37:19.030818  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.033149  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.033497  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.033520  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.033659  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.033842  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.033992  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.034105  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.034240  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.034434  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.034448  735111 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751 && echo "ha-190751" | sudo tee /etc/hostname
	I0916 13:37:19.155215  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751
	
	I0916 13:37:19.155246  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.157702  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.158016  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.158045  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.158188  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.158387  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.158539  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.158685  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.158834  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.159057  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.159080  735111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:37:19.274380  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:37:19.274408  735111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:37:19.274431  735111 buildroot.go:174] setting up certificates
	I0916 13:37:19.274442  735111 provision.go:84] configureAuth start
	I0916 13:37:19.274451  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:37:19.274755  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:19.277120  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.277480  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.277503  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.277636  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.279583  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.279832  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.279850  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.280031  735111 provision.go:143] copyHostCerts
	I0916 13:37:19.280058  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:37:19.280085  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:37:19.280095  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:37:19.280158  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:37:19.280230  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:37:19.280247  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:37:19.280253  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:37:19.280277  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:37:19.280315  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:37:19.280342  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:37:19.280354  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:37:19.280377  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:37:19.280421  735111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751 san=[127.0.0.1 192.168.39.94 ha-190751 localhost minikube]
	I0916 13:37:19.358656  735111 provision.go:177] copyRemoteCerts
	I0916 13:37:19.358719  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:37:19.358751  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.361346  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.361631  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.361660  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.361841  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.362025  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.362181  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.362298  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:19.447984  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:37:19.448069  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0916 13:37:19.471720  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:37:19.471802  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:37:19.494723  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:37:19.494803  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 13:37:19.517505  735111 provision.go:87] duration metric: took 243.050824ms to configureAuth
	I0916 13:37:19.517532  735111 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:37:19.517768  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:37:19.517863  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.520489  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.520804  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.520836  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.520943  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.521124  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.521280  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.521380  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.521534  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.521732  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.521746  735111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:37:19.747142  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
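For reference, the restart above is what makes CRI-O pick up the extra flag written to /etc/sysconfig/crio.minikube (here the in-cluster service CIDR 10.96.0.0/12 marked as an insecure registry). Whether the crio unit actually sources that file depends on the ISO's crio.service; a sketch of how to check it on the node:

	# Show the flags file and how the crio unit consumes it (path taken from the log above).
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i -E "EnvironmentFile|CRIO_MINIKUBE_OPTIONS"
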
	I0916 13:37:19.747168  735111 main.go:141] libmachine: Checking connection to Docker...
	I0916 13:37:19.747195  735111 main.go:141] libmachine: (ha-190751) Calling .GetURL
	I0916 13:37:19.748475  735111 main.go:141] libmachine: (ha-190751) DBG | Using libvirt version 6000000
	I0916 13:37:19.751506  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.751830  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.751854  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.752023  735111 main.go:141] libmachine: Docker is up and running!
	I0916 13:37:19.752039  735111 main.go:141] libmachine: Reticulating splines...
	I0916 13:37:19.752046  735111 client.go:171] duration metric: took 22.991630844s to LocalClient.Create
	I0916 13:37:19.752067  735111 start.go:167] duration metric: took 22.991694677s to libmachine.API.Create "ha-190751"
	I0916 13:37:19.752075  735111 start.go:293] postStartSetup for "ha-190751" (driver="kvm2")
	I0916 13:37:19.752084  735111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:37:19.752101  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.752313  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:37:19.752346  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.754590  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.754909  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.754934  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.755104  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.755250  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.755391  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.755530  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:19.840652  735111 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:37:19.844841  735111 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:37:19.844870  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:37:19.844951  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:37:19.845056  735111 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:37:19.845069  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:37:19.845191  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:37:19.855044  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:37:19.878510  735111 start.go:296] duration metric: took 126.418501ms for postStartSetup
	I0916 13:37:19.878588  735111 main.go:141] libmachine: (ha-190751) Calling .GetConfigRaw
	I0916 13:37:19.879237  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:19.881802  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.882162  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.882191  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.882390  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:37:19.882564  735111 start.go:128] duration metric: took 23.139111441s to createHost
	I0916 13:37:19.882591  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.884751  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.885045  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.885083  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.885209  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.885393  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.885536  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.885701  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.885842  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.886010  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.886025  735111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:37:19.994189  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726493839.969601699
	
	I0916 13:37:19.994215  735111 fix.go:216] guest clock: 1726493839.969601699
	I0916 13:37:19.994225  735111 fix.go:229] Guest: 2024-09-16 13:37:19.969601699 +0000 UTC Remote: 2024-09-16 13:37:19.882580313 +0000 UTC m=+23.238484318 (delta=87.021386ms)
	I0916 13:37:19.994252  735111 fix.go:200] guest clock delta is within tolerance: 87.021386ms
	I0916 13:37:19.994259  735111 start.go:83] releasing machines lock for "ha-190751", held for 23.25087569s
	I0916 13:37:19.994283  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.994538  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:19.997323  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.997698  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.997724  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.997857  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.998381  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.998573  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.998692  735111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:37:19.998736  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.998778  735111 ssh_runner.go:195] Run: cat /version.json
	I0916 13:37:19.998802  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:20.001458  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.001533  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.001871  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:20.001904  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:20.001925  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.001944  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.002037  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:20.002189  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:20.002204  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:20.002342  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:20.002375  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:20.002471  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:20.002463  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:20.002616  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:20.101835  735111 ssh_runner.go:195] Run: systemctl --version
	I0916 13:37:20.107791  735111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:37:20.265880  735111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:37:20.271930  735111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:37:20.271994  735111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:37:20.288455  735111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 13:37:20.288478  735111 start.go:495] detecting cgroup driver to use...
	I0916 13:37:20.288548  735111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:37:20.304990  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:37:20.318846  735111 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:37:20.318900  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:37:20.332278  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:37:20.345609  735111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:37:20.461469  735111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:37:20.606006  735111 docker.go:233] disabling docker service ...
	I0916 13:37:20.606088  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:37:20.619614  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:37:20.632364  735111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:37:20.758642  735111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:37:20.874000  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:37:20.887215  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:37:20.904742  735111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:37:20.904812  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.914408  735111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:37:20.914475  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.923964  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.933297  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.942868  735111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:37:20.952532  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.962048  735111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.977737  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.987167  735111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:37:20.995832  735111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 13:37:20.995898  735111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 13:37:21.009048  735111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
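The two commands above are the usual bridge-netfilter fallback: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A minimal sketch of the same check-then-load sequence, using the same keys that appear in the log:

	# Probe the bridge netfilter sysctl; load br_netfilter only if the key is missing.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter
	fi
	# Enable IPv4 forwarding, matching the runner step above.
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
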
	I0916 13:37:21.018792  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:37:21.130298  735111 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 13:37:21.220343  735111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:37:21.220470  735111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:37:21.225075  735111 start.go:563] Will wait 60s for crictl version
	I0916 13:37:21.225120  735111 ssh_runner.go:195] Run: which crictl
	I0916 13:37:21.228937  735111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:37:21.267510  735111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:37:21.267586  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:37:21.295850  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:37:21.323753  735111 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:37:21.324919  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:21.327486  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:21.327801  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:21.327845  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:21.328020  735111 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:37:21.331975  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:37:21.344361  735111 kubeadm.go:883] updating cluster {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 13:37:21.344463  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:37:21.344510  735111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:37:21.375985  735111 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 13:37:21.376057  735111 ssh_runner.go:195] Run: which lz4
	I0916 13:37:21.379835  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 13:37:21.379944  735111 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 13:37:21.383892  735111 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 13:37:21.383923  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 13:37:22.695033  735111 crio.go:462] duration metric: took 1.315122762s to copy over tarball
	I0916 13:37:22.695123  735111 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 13:37:24.632050  735111 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.936892624s)
	I0916 13:37:24.632087  735111 crio.go:469] duration metric: took 1.937024427s to extract the tarball
	I0916 13:37:24.632098  735111 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 13:37:24.667998  735111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:37:24.710398  735111 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 13:37:24.710426  735111 cache_images.go:84] Images are preloaded, skipping loading
	I0916 13:37:24.710436  735111 kubeadm.go:934] updating node { 192.168.39.94 8443 v1.31.1 crio true true} ...
	I0916 13:37:24.710548  735111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
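The kubelet unit fragment above is written to the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). On a running profile it can be inspected with something like the following sketch, assuming the profile name from this run:

	# Show the effective kubelet unit, including the generated 10-kubeadm.conf drop-in.
	minikube -p ha-190751 ssh "sudo systemctl cat kubelet"
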
	I0916 13:37:24.710628  735111 ssh_runner.go:195] Run: crio config
	I0916 13:37:24.758181  735111 cni.go:84] Creating CNI manager for ""
	I0916 13:37:24.758231  735111 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 13:37:24.758261  735111 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 13:37:24.758319  735111 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-190751 NodeName:ha-190751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 13:37:24.758657  735111 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-190751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
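The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml (see the cp further down). If a config like this needs to be checked by hand, one option is a dry run against the same file on the node, a sketch that assumes root and the bundled kubeadm binary:

	# Render what kubeadm would do with this config without changing the node.
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run
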
	I0916 13:37:24.758926  735111 kube-vip.go:115] generating kube-vip config ...
	I0916 13:37:24.758973  735111 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:37:24.776756  735111 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:37:24.776868  735111 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
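Per the manifest above, kube-vip runs with leader election enabled and uses the Lease named plndr-cp-lock in kube-system to decide which control-plane node announces the VIP 192.168.39.254. Once the cluster is up, the current holder can be read from that Lease, a sketch assuming kubectl is pointed at this cluster:

	# The holderIdentity field names the node currently holding the control-plane VIP.
	kubectl -n kube-system get lease plndr-cp-lock -o yaml
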
	I0916 13:37:24.776928  735111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:37:24.786665  735111 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 13:37:24.786733  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 13:37:24.795903  735111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 13:37:24.811114  735111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:37:24.826580  735111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 13:37:24.841958  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0916 13:37:24.857386  735111 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:37:24.860966  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:37:24.872483  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:37:25.003846  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:37:25.020742  735111 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.94
	I0916 13:37:25.020775  735111 certs.go:194] generating shared ca certs ...
	I0916 13:37:25.020796  735111 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.021003  735111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:37:25.021076  735111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:37:25.021091  735111 certs.go:256] generating profile certs ...
	I0916 13:37:25.021155  735111 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:37:25.021174  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt with IP's: []
	I0916 13:37:25.079578  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt ...
	I0916 13:37:25.079607  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt: {Name:mk140d1c2f4c990916187ba804583d1a9cf33684 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.079791  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key ...
	I0916 13:37:25.079810  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key: {Name:mk5e962e9f96c994b7c25f532905372cf816e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.079905  735111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481
	I0916 13:37:25.079919  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.254]
	I0916 13:37:25.235476  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481 ...
	I0916 13:37:25.235509  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481: {Name:mk417c790613e4e78adbdd4499ae6a9c00dc3e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.235708  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481 ...
	I0916 13:37:25.235727  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481: {Name:mkfbbc964df63ee80e08357dfbaf68844994ce1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.235825  735111 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:37:25.235950  735111 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
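The apiserver certificate assembled above carries the SANs listed at 13:37:25.079919 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.94 and the HA VIP 192.168.39.254). They can be verified directly on the generated file, a sketch using the profile path from this run:

	# Print the Subject Alternative Names embedded in the profile's apiserver certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"
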
	I0916 13:37:25.236037  735111 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:37:25.236058  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt with IP's: []
	I0916 13:37:25.593211  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt ...
	I0916 13:37:25.593242  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt: {Name:mkd0a58170323377b51ec2422eecfc9ba233e69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.593617  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key ...
	I0916 13:37:25.593652  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key: {Name:mk697035e09a8239fdc475e00fc850425d13fa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.593818  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:37:25.593840  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:37:25.593854  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:37:25.593872  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:37:25.593974  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:37:25.594027  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:37:25.594051  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:37:25.594072  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:37:25.594157  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:37:25.594216  735111 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:37:25.594233  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:37:25.594276  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:37:25.594316  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:37:25.594356  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:37:25.594431  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:37:25.594475  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.594500  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.594523  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.595141  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:37:25.620663  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:37:25.642887  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:37:25.665084  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:37:25.687133  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 13:37:25.709071  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 13:37:25.732657  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:37:25.755038  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:37:25.780332  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:37:25.808412  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:37:25.832402  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:37:25.858022  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 13:37:25.873892  735111 ssh_runner.go:195] Run: openssl version
	I0916 13:37:25.879546  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:37:25.890171  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.894507  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.894561  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.900313  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 13:37:25.911168  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:37:25.921818  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.926147  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.926200  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.931623  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:37:25.942306  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:37:25.952913  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.957227  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.957296  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.962718  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
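The ln -fs steps above follow OpenSSL's subject-hash convention: each CA file under /etc/ssl/certs gets a <hash>.0 symlink (3ec20f2e, b5213941 and 51391683 in this run) so TLS clients can locate it by hash. Recreating one of those links by hand looks like this sketch:

	# Link a CA certificate under its OpenSSL subject hash so it is found during verification.
	cd /etc/ssl/certs
	sudo ln -fs minikubeCA.pem "$(openssl x509 -hash -noout -in minikubeCA.pem).0"
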
	I0916 13:37:25.972813  735111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:37:25.976658  735111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 13:37:25.976719  735111 kubeadm.go:392] StartCluster: {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:37:25.976832  735111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 13:37:25.976891  735111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 13:37:26.012234  735111 cri.go:89] found id: ""
	I0916 13:37:26.012309  735111 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 13:37:26.022128  735111 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 13:37:26.031471  735111 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 13:37:26.040533  735111 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 13:37:26.040551  735111 kubeadm.go:157] found existing configuration files:
	
	I0916 13:37:26.040587  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 13:37:26.049279  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 13:37:26.049314  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 13:37:26.058199  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 13:37:26.066645  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 13:37:26.066701  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 13:37:26.075640  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 13:37:26.084115  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 13:37:26.084158  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 13:37:26.093121  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 13:37:26.101594  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 13:37:26.101649  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
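The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint and removed if it does not reference it (here every grep exits with status 2 simply because the files do not exist yet on the fresh VM). A minimal local sketch of that check-then-remove pattern, using only the standard library rather than minikube's ssh_runner, might look like this; the endpoint string and file list are taken from the log, the helper name is invented:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that exists but does not
// reference the expected control-plane endpoint. Missing files are skipped,
// mirroring the "No such file or directory" cases in the log above.
func cleanStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if os.IsNotExist(err) {
			continue // nothing to clean up
		}
		if err != nil {
			return fmt.Errorf("reading %s: %w", p, err)
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(p); err != nil {
				return fmt.Errorf("removing stale %s: %w", p, err)
			}
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", files); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}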
	I0916 13:37:26.110723  735111 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 13:37:26.204835  735111 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 13:37:26.204894  735111 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 13:37:26.321862  735111 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 13:37:26.321980  735111 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 13:37:26.322110  735111 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 13:37:26.331078  735111 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 13:37:26.353738  735111 out.go:235]   - Generating certificates and keys ...
	I0916 13:37:26.353891  735111 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 13:37:26.354005  735111 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 13:37:26.395930  735111 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 13:37:26.499160  735111 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 13:37:26.632167  735111 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 13:37:26.833214  735111 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 13:37:27.181214  735111 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 13:37:27.181393  735111 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-190751 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I0916 13:37:27.371833  735111 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 13:37:27.372003  735111 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-190751 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I0916 13:37:27.585152  735111 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 13:37:27.810682  735111 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 13:37:28.082953  735111 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 13:37:28.083071  735111 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 13:37:28.258523  735111 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 13:37:28.367925  735111 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 13:37:28.814879  735111 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 13:37:28.932823  735111 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 13:37:29.004465  735111 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 13:37:29.004568  735111 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 13:37:29.007213  735111 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 13:37:29.009214  735111 out.go:235]   - Booting up control plane ...
	I0916 13:37:29.009358  735111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 13:37:29.009473  735111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 13:37:29.009582  735111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 13:37:29.024463  735111 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 13:37:29.030729  735111 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 13:37:29.030801  735111 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 13:37:29.175858  735111 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 13:37:29.176023  735111 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 13:37:29.693416  735111 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 518.039092ms
	I0916 13:37:29.693512  735111 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 13:37:38.879766  735111 kubeadm.go:310] [api-check] The API server is healthy after 9.191119687s
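The [kubelet-check] and [api-check] phases above are health polls with a 4m0s ceiling: the kubelet's healthz on 127.0.0.1:10248 answered after ~518ms and the API server after ~9.2s. A rough standalone sketch of that kind of polling follows; it targets the plain-HTTP kubelet endpoint from the log (the API-server check would go over HTTPS), and the poll interval and HTTP timeout are illustrative, not kubeadm's actual values:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or ctx expires.
func waitHealthy(ctx context.Context, url string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("%s never became healthy: %w", url, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}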
	I0916 13:37:38.891993  735111 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 13:37:38.907636  735111 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 13:37:38.947498  735111 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 13:37:38.947721  735111 kubeadm.go:310] [mark-control-plane] Marking the node ha-190751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 13:37:38.967620  735111 kubeadm.go:310] [bootstrap-token] Using token: 19lgif.tvhngrrmbtbid3dy
	I0916 13:37:38.968812  735111 out.go:235]   - Configuring RBAC rules ...
	I0916 13:37:38.968935  735111 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 13:37:38.976592  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 13:37:38.989031  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 13:37:38.993154  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 13:37:38.996206  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 13:37:38.999675  735111 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 13:37:39.287523  735111 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 13:37:39.712013  735111 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 13:37:40.285980  735111 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 13:37:40.286947  735111 kubeadm.go:310] 
	I0916 13:37:40.287033  735111 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 13:37:40.287043  735111 kubeadm.go:310] 
	I0916 13:37:40.287161  735111 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 13:37:40.287172  735111 kubeadm.go:310] 
	I0916 13:37:40.287208  735111 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 13:37:40.287304  735111 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 13:37:40.287382  735111 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 13:37:40.287399  735111 kubeadm.go:310] 
	I0916 13:37:40.287443  735111 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 13:37:40.287449  735111 kubeadm.go:310] 
	I0916 13:37:40.287490  735111 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 13:37:40.287499  735111 kubeadm.go:310] 
	I0916 13:37:40.287563  735111 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 13:37:40.287651  735111 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 13:37:40.287711  735111 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 13:37:40.287717  735111 kubeadm.go:310] 
	I0916 13:37:40.287812  735111 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 13:37:40.287900  735111 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 13:37:40.287908  735111 kubeadm.go:310] 
	I0916 13:37:40.287998  735111 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 19lgif.tvhngrrmbtbid3dy \
	I0916 13:37:40.288167  735111 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 \
	I0916 13:37:40.288200  735111 kubeadm.go:310] 	--control-plane 
	I0916 13:37:40.288208  735111 kubeadm.go:310] 
	I0916 13:37:40.288337  735111 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 13:37:40.288347  735111 kubeadm.go:310] 
	I0916 13:37:40.288460  735111 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 19lgif.tvhngrrmbtbid3dy \
	I0916 13:37:40.288620  735111 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 
	I0916 13:37:40.289640  735111 kubeadm.go:310] W0916 13:37:26.183865     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 13:37:40.290030  735111 kubeadm.go:310] W0916 13:37:26.184708     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 13:37:40.290189  735111 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 13:37:40.290209  735111 cni.go:84] Creating CNI manager for ""
	I0916 13:37:40.290218  735111 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 13:37:40.291806  735111 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 13:37:40.292983  735111 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 13:37:40.298536  735111 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 13:37:40.298559  735111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 13:37:40.319523  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
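The CNI step above copies a kindnet manifest to /var/tmp/minikube/cni.yaml on the VM and applies it with the bundled kubectl against /var/lib/minikube/kubeconfig. A hedged sketch of composing and running that same apply command with os/exec follows; in minikube this actually runs over ssh_runner inside the VM, whereas the sketch runs it on the local host (paths copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths mirror the log line above; adjust for a different Kubernetes version.
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	cmd := exec.Command("sudo", kubectl,
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}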
	I0916 13:37:40.756144  735111 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 13:37:40.756253  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 13:37:40.756269  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-190751 minikube.k8s.io/updated_at=2024_09_16T13_37_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86 minikube.k8s.io/name=ha-190751 minikube.k8s.io/primary=true
	I0916 13:37:40.783430  735111 ops.go:34] apiserver oom_adj: -16
	I0916 13:37:40.911968  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 13:37:41.412359  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 13:37:41.553914  735111 kubeadm.go:1113] duration metric: took 797.744629ms to wait for elevateKubeSystemPrivileges
	I0916 13:37:41.553952  735111 kubeadm.go:394] duration metric: took 15.577239114s to StartCluster
	I0916 13:37:41.553973  735111 settings.go:142] acquiring lock: {Name:mka9d51f09298db6ba9006267d9a91b0a28fad59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:41.554044  735111 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:37:41.554728  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/kubeconfig: {Name:mk84449075783d20927a7d708361081f8c4a2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:41.554924  735111 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:37:41.554947  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 13:37:41.554954  735111 start.go:241] waiting for startup goroutines ...
	I0916 13:37:41.554967  735111 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 13:37:41.555096  735111 addons.go:69] Setting storage-provisioner=true in profile "ha-190751"
	I0916 13:37:41.555117  735111 addons.go:234] Setting addon storage-provisioner=true in "ha-190751"
	I0916 13:37:41.555132  735111 addons.go:69] Setting default-storageclass=true in profile "ha-190751"
	I0916 13:37:41.555175  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:37:41.555179  735111 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-190751"
	I0916 13:37:41.555152  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:37:41.555715  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.555751  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.555720  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.555855  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.570781  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0916 13:37:41.570887  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0916 13:37:41.571275  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.571416  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.571837  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.571860  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.571954  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.571978  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.572205  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.572388  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.572391  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:41.573010  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.573062  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.574573  735111 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:37:41.574954  735111 kapi.go:59] client config for ha-190751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 13:37:41.575505  735111 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 13:37:41.575908  735111 addons.go:234] Setting addon default-storageclass=true in "ha-190751"
	I0916 13:37:41.575955  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:37:41.576336  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.576383  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.588809  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0916 13:37:41.589322  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.589856  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.589876  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.590208  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.590405  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:41.592142  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:41.594636  735111 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 13:37:41.595564  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0916 13:37:41.596015  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.596154  735111 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 13:37:41.596174  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 13:37:41.596195  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:41.596505  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.596526  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.596882  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.597396  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.597437  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.599683  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.600156  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:41.600231  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.600475  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:41.600656  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:41.600821  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:41.600952  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:41.613005  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0916 13:37:41.613496  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.614052  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.614078  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.614432  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.614650  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:41.616031  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:41.616275  735111 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 13:37:41.616293  735111 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 13:37:41.616314  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:41.619255  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.619735  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:41.619759  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.619892  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:41.619996  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:41.620134  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:41.620230  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:41.731350  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 13:37:41.743817  735111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 13:37:41.831801  735111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 13:37:42.102488  735111 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
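The long /bin/bash pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway 192.168.39.1. The following is a client-go alternative to that kubectl/sed/replace pipeline, shown only as a sketch: the kubeconfig path and IP come from the log, the `log` plugin tweak from the sed expression is omitted, and the exact Corefile indentation is approximate:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		fmt.Println("host record already present")
		return
	}
	hostsBlock := []string{
		"    hosts {",
		"       192.168.39.1 host.minikube.internal",
		"       fallthrough",
		"    }",
	}
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Insert the hosts block just before the forward plugin, like the sed above.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock...)
		}
		out = append(out, line)
	}
	cm.Data["Corefile"] = strings.Join(out, "\n")
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("host record injected into CoreDNS Corefile")
}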
	I0916 13:37:42.297855  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.297879  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298186  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298208  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.298217  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.298225  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298266  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.298295  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298496  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298514  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.298587  735111 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 13:37:42.298606  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298617  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.298621  735111 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 13:37:42.298642  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.298679  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298745  735111 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 13:37:42.298758  735111 round_trippers.go:469] Request Headers:
	I0916 13:37:42.298769  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:37:42.298783  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:37:42.298912  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298926  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.309380  735111 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 13:37:42.309961  735111 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 13:37:42.309977  735111 round_trippers.go:469] Request Headers:
	I0916 13:37:42.309995  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:37:42.310002  735111 round_trippers.go:473]     Content-Type: application/json
	I0916 13:37:42.310007  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:37:42.312517  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:37:42.312659  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.312672  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.312928  735111 main.go:141] libmachine: (ha-190751) DBG | Closing plugin on server side
	I0916 13:37:42.312987  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.312999  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.314412  735111 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 13:37:42.315519  735111 addons.go:510] duration metric: took 760.558523ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 13:37:42.315551  735111 start.go:246] waiting for cluster config update ...
	I0916 13:37:42.315562  735111 start.go:255] writing updated cluster config ...
	I0916 13:37:42.316877  735111 out.go:201] 
	I0916 13:37:42.318103  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:37:42.318190  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:37:42.319697  735111 out.go:177] * Starting "ha-190751-m02" control-plane node in "ha-190751" cluster
	I0916 13:37:42.320749  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:37:42.320769  735111 cache.go:56] Caching tarball of preloaded images
	I0916 13:37:42.320856  735111 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:37:42.320868  735111 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:37:42.320948  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:37:42.321110  735111 start.go:360] acquireMachinesLock for ha-190751-m02: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:37:42.321159  735111 start.go:364] duration metric: took 30.332µs to acquireMachinesLock for "ha-190751-m02"
	I0916 13:37:42.321183  735111 start.go:93] Provisioning new machine with config: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:37:42.321267  735111 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 13:37:42.322661  735111 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 13:37:42.322741  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:42.322780  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:42.337055  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I0916 13:37:42.337532  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:42.338027  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:42.338044  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:42.338383  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:42.338609  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:37:42.338757  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:37:42.338913  735111 start.go:159] libmachine.API.Create for "ha-190751" (driver="kvm2")
	I0916 13:37:42.338943  735111 client.go:168] LocalClient.Create starting
	I0916 13:37:42.338970  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 13:37:42.339004  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:37:42.339021  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:37:42.339090  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 13:37:42.339114  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:37:42.339130  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:37:42.339155  735111 main.go:141] libmachine: Running pre-create checks...
	I0916 13:37:42.339165  735111 main.go:141] libmachine: (ha-190751-m02) Calling .PreCreateCheck
	I0916 13:37:42.339311  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetConfigRaw
	I0916 13:37:42.339700  735111 main.go:141] libmachine: Creating machine...
	I0916 13:37:42.339713  735111 main.go:141] libmachine: (ha-190751-m02) Calling .Create
	I0916 13:37:42.339867  735111 main.go:141] libmachine: (ha-190751-m02) Creating KVM machine...
	I0916 13:37:42.341059  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found existing default KVM network
	I0916 13:37:42.341247  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found existing private KVM network mk-ha-190751
	I0916 13:37:42.341384  735111 main.go:141] libmachine: (ha-190751-m02) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02 ...
	I0916 13:37:42.341417  735111 main.go:141] libmachine: (ha-190751-m02) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 13:37:42.341455  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.341364  735462 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:37:42.341541  735111 main.go:141] libmachine: (ha-190751-m02) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 13:37:42.605852  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.605728  735462 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa...
	I0916 13:37:42.679360  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.679197  735462 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/ha-190751-m02.rawdisk...
	I0916 13:37:42.679396  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Writing magic tar header
	I0916 13:37:42.679414  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Writing SSH key tar header
	I0916 13:37:42.679425  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.679313  735462 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02 ...
	I0916 13:37:42.679450  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02
	I0916 13:37:42.679459  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 13:37:42.679481  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:37:42.679495  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 13:37:42.679539  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02 (perms=drwx------)
	I0916 13:37:42.679575  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 13:37:42.679590  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 13:37:42.679605  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 13:37:42.679617  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 13:37:42.679632  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home
	I0916 13:37:42.679655  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 13:37:42.679680  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Skipping /home - not owner
	I0916 13:37:42.679689  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 13:37:42.679704  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 13:37:42.679714  735111 main.go:141] libmachine: (ha-190751-m02) Creating domain...
	I0916 13:37:42.680759  735111 main.go:141] libmachine: (ha-190751-m02) define libvirt domain using xml: 
	I0916 13:37:42.680791  735111 main.go:141] libmachine: (ha-190751-m02) <domain type='kvm'>
	I0916 13:37:42.680802  735111 main.go:141] libmachine: (ha-190751-m02)   <name>ha-190751-m02</name>
	I0916 13:37:42.680808  735111 main.go:141] libmachine: (ha-190751-m02)   <memory unit='MiB'>2200</memory>
	I0916 13:37:42.680816  735111 main.go:141] libmachine: (ha-190751-m02)   <vcpu>2</vcpu>
	I0916 13:37:42.680825  735111 main.go:141] libmachine: (ha-190751-m02)   <features>
	I0916 13:37:42.680833  735111 main.go:141] libmachine: (ha-190751-m02)     <acpi/>
	I0916 13:37:42.680842  735111 main.go:141] libmachine: (ha-190751-m02)     <apic/>
	I0916 13:37:42.680849  735111 main.go:141] libmachine: (ha-190751-m02)     <pae/>
	I0916 13:37:42.680857  735111 main.go:141] libmachine: (ha-190751-m02)     
	I0916 13:37:42.680864  735111 main.go:141] libmachine: (ha-190751-m02)   </features>
	I0916 13:37:42.680878  735111 main.go:141] libmachine: (ha-190751-m02)   <cpu mode='host-passthrough'>
	I0916 13:37:42.680888  735111 main.go:141] libmachine: (ha-190751-m02)   
	I0916 13:37:42.680897  735111 main.go:141] libmachine: (ha-190751-m02)   </cpu>
	I0916 13:37:42.680904  735111 main.go:141] libmachine: (ha-190751-m02)   <os>
	I0916 13:37:42.680912  735111 main.go:141] libmachine: (ha-190751-m02)     <type>hvm</type>
	I0916 13:37:42.680919  735111 main.go:141] libmachine: (ha-190751-m02)     <boot dev='cdrom'/>
	I0916 13:37:42.680928  735111 main.go:141] libmachine: (ha-190751-m02)     <boot dev='hd'/>
	I0916 13:37:42.680936  735111 main.go:141] libmachine: (ha-190751-m02)     <bootmenu enable='no'/>
	I0916 13:37:42.680947  735111 main.go:141] libmachine: (ha-190751-m02)   </os>
	I0916 13:37:42.680974  735111 main.go:141] libmachine: (ha-190751-m02)   <devices>
	I0916 13:37:42.680993  735111 main.go:141] libmachine: (ha-190751-m02)     <disk type='file' device='cdrom'>
	I0916 13:37:42.681005  735111 main.go:141] libmachine: (ha-190751-m02)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/boot2docker.iso'/>
	I0916 13:37:42.681015  735111 main.go:141] libmachine: (ha-190751-m02)       <target dev='hdc' bus='scsi'/>
	I0916 13:37:42.681025  735111 main.go:141] libmachine: (ha-190751-m02)       <readonly/>
	I0916 13:37:42.681039  735111 main.go:141] libmachine: (ha-190751-m02)     </disk>
	I0916 13:37:42.681049  735111 main.go:141] libmachine: (ha-190751-m02)     <disk type='file' device='disk'>
	I0916 13:37:42.681061  735111 main.go:141] libmachine: (ha-190751-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 13:37:42.681074  735111 main.go:141] libmachine: (ha-190751-m02)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/ha-190751-m02.rawdisk'/>
	I0916 13:37:42.681084  735111 main.go:141] libmachine: (ha-190751-m02)       <target dev='hda' bus='virtio'/>
	I0916 13:37:42.681092  735111 main.go:141] libmachine: (ha-190751-m02)     </disk>
	I0916 13:37:42.681101  735111 main.go:141] libmachine: (ha-190751-m02)     <interface type='network'>
	I0916 13:37:42.681107  735111 main.go:141] libmachine: (ha-190751-m02)       <source network='mk-ha-190751'/>
	I0916 13:37:42.681113  735111 main.go:141] libmachine: (ha-190751-m02)       <model type='virtio'/>
	I0916 13:37:42.681119  735111 main.go:141] libmachine: (ha-190751-m02)     </interface>
	I0916 13:37:42.681128  735111 main.go:141] libmachine: (ha-190751-m02)     <interface type='network'>
	I0916 13:37:42.681156  735111 main.go:141] libmachine: (ha-190751-m02)       <source network='default'/>
	I0916 13:37:42.681171  735111 main.go:141] libmachine: (ha-190751-m02)       <model type='virtio'/>
	I0916 13:37:42.681184  735111 main.go:141] libmachine: (ha-190751-m02)     </interface>
	I0916 13:37:42.681193  735111 main.go:141] libmachine: (ha-190751-m02)     <serial type='pty'>
	I0916 13:37:42.681201  735111 main.go:141] libmachine: (ha-190751-m02)       <target port='0'/>
	I0916 13:37:42.681211  735111 main.go:141] libmachine: (ha-190751-m02)     </serial>
	I0916 13:37:42.681219  735111 main.go:141] libmachine: (ha-190751-m02)     <console type='pty'>
	I0916 13:37:42.681229  735111 main.go:141] libmachine: (ha-190751-m02)       <target type='serial' port='0'/>
	I0916 13:37:42.681262  735111 main.go:141] libmachine: (ha-190751-m02)     </console>
	I0916 13:37:42.681289  735111 main.go:141] libmachine: (ha-190751-m02)     <rng model='virtio'>
	I0916 13:37:42.681304  735111 main.go:141] libmachine: (ha-190751-m02)       <backend model='random'>/dev/random</backend>
	I0916 13:37:42.681317  735111 main.go:141] libmachine: (ha-190751-m02)     </rng>
	I0916 13:37:42.681334  735111 main.go:141] libmachine: (ha-190751-m02)     
	I0916 13:37:42.681345  735111 main.go:141] libmachine: (ha-190751-m02)     
	I0916 13:37:42.681360  735111 main.go:141] libmachine: (ha-190751-m02)   </devices>
	I0916 13:37:42.681369  735111 main.go:141] libmachine: (ha-190751-m02) </domain>
	I0916 13:37:42.681380  735111 main.go:141] libmachine: (ha-190751-m02) 
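The XML echoed above is the libvirt domain definition the kvm2 driver hands to libvirt for the m02 machine: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a cdrom, the raw disk image, and NICs on both mk-ha-190751 and the default network. As an illustration only, a trimmed-down version of such a definition can be rendered from a small config struct with text/template; this is a simplified stand-in, not the driver's actual template:

package main

import (
	"os"
	"text/template"
)

type domainConfig struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISO       string
	Disk      string
	Network   string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.Disk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:      "ha-190751-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISO:       "/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/boot2docker.iso",
		Disk:      "/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/ha-190751-m02.rawdisk",
		Network:   "mk-ha-190751",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}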
	I0916 13:37:42.688231  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:1f:6c:3b in network default
	I0916 13:37:42.689057  735111 main.go:141] libmachine: (ha-190751-m02) Ensuring networks are active...
	I0916 13:37:42.689085  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:42.689818  735111 main.go:141] libmachine: (ha-190751-m02) Ensuring network default is active
	I0916 13:37:42.690179  735111 main.go:141] libmachine: (ha-190751-m02) Ensuring network mk-ha-190751 is active
	I0916 13:37:42.690645  735111 main.go:141] libmachine: (ha-190751-m02) Getting domain xml...
	I0916 13:37:42.691437  735111 main.go:141] libmachine: (ha-190751-m02) Creating domain...
	I0916 13:37:43.942323  735111 main.go:141] libmachine: (ha-190751-m02) Waiting to get IP...
	I0916 13:37:43.943256  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:43.943656  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:43.943679  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:43.943635  735462 retry.go:31] will retry after 295.084615ms: waiting for machine to come up
	I0916 13:37:44.240016  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:44.240562  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:44.240586  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:44.240509  735462 retry.go:31] will retry after 383.461675ms: waiting for machine to come up
	I0916 13:37:44.626046  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:44.626530  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:44.626563  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:44.626470  735462 retry.go:31] will retry after 438.005593ms: waiting for machine to come up
	I0916 13:37:45.066175  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:45.066684  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:45.066718  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:45.066618  735462 retry.go:31] will retry after 459.760025ms: waiting for machine to come up
	I0916 13:37:45.527795  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:45.528205  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:45.528228  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:45.528177  735462 retry.go:31] will retry after 749.840232ms: waiting for machine to come up
	I0916 13:37:46.279851  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:46.280287  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:46.280315  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:46.280234  735462 retry.go:31] will retry after 717.950644ms: waiting for machine to come up
	I0916 13:37:47.000301  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:47.000697  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:47.000721  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:47.000641  735462 retry.go:31] will retry after 1.10090672s: waiting for machine to come up
	I0916 13:37:48.102653  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:48.102982  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:48.103004  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:48.102932  735462 retry.go:31] will retry after 1.357065606s: waiting for machine to come up
	I0916 13:37:49.461205  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:49.461635  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:49.461685  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:49.461593  735462 retry.go:31] will retry after 1.820123754s: waiting for machine to come up
	I0916 13:37:51.284728  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:51.285283  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:51.285313  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:51.285227  735462 retry.go:31] will retry after 1.535295897s: waiting for machine to come up
	I0916 13:37:52.821910  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:52.822436  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:52.822464  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:52.822416  735462 retry.go:31] will retry after 2.276365416s: waiting for machine to come up
	I0916 13:37:55.101849  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:55.102243  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:55.102271  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:55.102193  735462 retry.go:31] will retry after 2.597037824s: waiting for machine to come up
	I0916 13:37:57.701131  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:57.701738  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:57.701763  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:57.701687  735462 retry.go:31] will retry after 3.553511192s: waiting for machine to come up
	I0916 13:38:01.259301  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:01.259684  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:38:01.259715  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:38:01.259645  735462 retry.go:31] will retry after 3.46552714s: waiting for machine to come up
	I0916 13:38:04.728155  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.728583  735111 main.go:141] libmachine: (ha-190751-m02) Found IP for machine: 192.168.39.192
	I0916 13:38:04.728609  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has current primary IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
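The string of "will retry after ..." lines above is the driver polling libvirt's DHCP leases for the new domain's MAC, backing off with growing, jittered delays until an address appears (a little over 20 seconds here before 192.168.39.192 shows up). A standalone sketch of that shape of retry loop follows; the lookup callback is a placeholder for the lease query and the delay schedule is invented, not retry.go's:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// waitForIP keeps calling lookup with increasing, lightly jittered delays
// until it returns an address or the deadline passes.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the backoff between attempts
	}
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 5 {
			return "", errNoIP // pretend the lease hasn't appeared yet
		}
		return "192.168.39.192", nil // the address the log eventually reports
	}, 2*time.Minute)
	fmt.Println(ip, err)
}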
	I0916 13:38:04.728617  735111 main.go:141] libmachine: (ha-190751-m02) Reserving static IP address...
	I0916 13:38:04.729005  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find host DHCP lease matching {name: "ha-190751-m02", mac: "52:54:00:41:52:c1", ip: "192.168.39.192"} in network mk-ha-190751
	I0916 13:38:04.800262  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Getting to WaitForSSH function...
	I0916 13:38:04.800290  735111 main.go:141] libmachine: (ha-190751-m02) Reserved static IP address: 192.168.39.192
	I0916 13:38:04.800302  735111 main.go:141] libmachine: (ha-190751-m02) Waiting for SSH to be available...
	I0916 13:38:04.803047  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.803493  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:04.803526  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.803734  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Using SSH client type: external
	I0916 13:38:04.803780  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa (-rw-------)
	I0916 13:38:04.803812  735111 main.go:141] libmachine: (ha-190751-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:38:04.803824  735111 main.go:141] libmachine: (ha-190751-m02) DBG | About to run SSH command:
	I0916 13:38:04.803871  735111 main.go:141] libmachine: (ha-190751-m02) DBG | exit 0
	I0916 13:38:04.925602  735111 main.go:141] libmachine: (ha-190751-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 13:38:04.925905  735111 main.go:141] libmachine: (ha-190751-m02) KVM machine creation complete!
	I0916 13:38:04.926193  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetConfigRaw
	I0916 13:38:04.926774  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:04.926972  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:04.927113  735111 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 13:38:04.927130  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:38:04.928234  735111 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 13:38:04.928251  735111 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 13:38:04.928259  735111 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 13:38:04.928267  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:04.930468  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.930807  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:04.930844  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.930986  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:04.931135  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:04.931283  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:04.931393  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:04.931559  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:04.931790  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:04.931805  735111 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 13:38:05.032794  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:38:05.032819  735111 main.go:141] libmachine: Detecting the provisioner...
	I0916 13:38:05.032830  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.035714  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.036055  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.036083  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.036200  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.036385  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.036548  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.036685  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.036859  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.037049  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.037060  735111 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 13:38:05.137958  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 13:38:05.138060  735111 main.go:141] libmachine: found compatible host: buildroot
	I0916 13:38:05.138074  735111 main.go:141] libmachine: Provisioning with buildroot...
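
The provisioner detection above keys off the ID field of the /etc/os-release output a few lines earlier. A minimal Go sketch of that parsing step, using only the fields shown in the log; the helper shape and the direct ID lookup are illustrative assumptions, not minikube's actual detection code.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// The exact output returned by `cat /etc/os-release` in the log above.
	osRelease := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

	// Parse KEY=VALUE lines, stripping optional quotes around the value.
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			fields[k] = strings.Trim(v, `"`)
		}
	}
	fmt.Println("compatible host:", fields["ID"]) // prints "buildroot"
}
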
	I0916 13:38:05.138088  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:38:05.138310  735111 buildroot.go:166] provisioning hostname "ha-190751-m02"
	I0916 13:38:05.138334  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:38:05.138539  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.140899  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.141226  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.141244  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.141396  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.141566  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.141738  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.141881  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.142038  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.142199  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.142210  735111 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751-m02 && echo "ha-190751-m02" | sudo tee /etc/hostname
	I0916 13:38:05.259526  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751-m02
	
	I0916 13:38:05.259556  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.262559  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.262928  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.262955  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.263147  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.263355  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.263516  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.263659  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.263848  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.264041  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.264058  735111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:38:05.373840  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:38:05.373870  735111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:38:05.373890  735111 buildroot.go:174] setting up certificates
	I0916 13:38:05.373901  735111 provision.go:84] configureAuth start
	I0916 13:38:05.373914  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:38:05.374195  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:05.377605  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.377980  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.378007  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.378166  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.380495  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.380835  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.380864  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.381025  735111 provision.go:143] copyHostCerts
	I0916 13:38:05.381056  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:38:05.381083  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:38:05.381092  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:38:05.381156  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:38:05.381241  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:38:05.381259  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:38:05.381263  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:38:05.381289  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:38:05.381346  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:38:05.381363  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:38:05.381369  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:38:05.381391  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:38:05.381452  735111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751-m02 san=[127.0.0.1 192.168.39.192 ha-190751-m02 localhost minikube]
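
The "generating server cert" step above issues a node certificate signed by the shared minikube CA, carrying the organization and SAN list shown in the log line. A self-contained sketch of that kind of issuance with crypto/x509; the key size, validity period, and in-memory CA are illustrative assumptions (the real flow loads ca.pem / ca-key.pem from disk), and error returns are elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads ca.pem/ca-key.pem from the .minikube dir.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate for the new node, with the org and SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-190751-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.192")},
		DNSNames:     []string{"ha-190751-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the signed server cert in PEM form (server.pem in the log).
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
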
	I0916 13:38:05.637241  735111 provision.go:177] copyRemoteCerts
	I0916 13:38:05.637298  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:38:05.637322  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.639811  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.640189  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.640221  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.640337  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.640528  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.640702  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.640863  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:05.723650  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:38:05.723719  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 13:38:05.750479  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:38:05.750550  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 13:38:05.773752  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:38:05.773855  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:38:05.796174  735111 provision.go:87] duration metric: took 422.260451ms to configureAuth
	I0916 13:38:05.796199  735111 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:38:05.796381  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:05.796473  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.798924  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.799224  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.799253  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.799446  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.799646  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.799813  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.799976  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.800123  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.800291  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.800306  735111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:38:06.020208  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 13:38:06.020242  735111 main.go:141] libmachine: Checking connection to Docker...
	I0916 13:38:06.020252  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetURL
	I0916 13:38:06.021653  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Using libvirt version 6000000
	I0916 13:38:06.024072  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.024436  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.024466  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.024576  735111 main.go:141] libmachine: Docker is up and running!
	I0916 13:38:06.024590  735111 main.go:141] libmachine: Reticulating splines...
	I0916 13:38:06.024599  735111 client.go:171] duration metric: took 23.685647791s to LocalClient.Create
	I0916 13:38:06.024624  735111 start.go:167] duration metric: took 23.685713191s to libmachine.API.Create "ha-190751"
	I0916 13:38:06.024636  735111 start.go:293] postStartSetup for "ha-190751-m02" (driver="kvm2")
	I0916 13:38:06.024648  735111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:38:06.024674  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.024937  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:38:06.024957  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:06.026882  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.027186  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.027211  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.027329  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.027492  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.027649  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.027787  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:06.107825  735111 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:38:06.112226  735111 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:38:06.112253  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:38:06.112340  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:38:06.112437  735111 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:38:06.112449  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:38:06.112528  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:38:06.121914  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:38:06.145214  735111 start.go:296] duration metric: took 120.567037ms for postStartSetup
	I0916 13:38:06.145254  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetConfigRaw
	I0916 13:38:06.145854  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:06.148213  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.148585  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.148613  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.148814  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:38:06.149003  735111 start.go:128] duration metric: took 23.827724525s to createHost
	I0916 13:38:06.149027  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:06.151115  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.151449  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.151485  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.151581  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.151739  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.151861  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.151984  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.152149  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:06.152361  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:06.152376  735111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:38:06.254031  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726493886.213043192
	
	I0916 13:38:06.254059  735111 fix.go:216] guest clock: 1726493886.213043192
	I0916 13:38:06.254069  735111 fix.go:229] Guest: 2024-09-16 13:38:06.213043192 +0000 UTC Remote: 2024-09-16 13:38:06.149015328 +0000 UTC m=+69.504919332 (delta=64.027864ms)
	I0916 13:38:06.254094  735111 fix.go:200] guest clock delta is within tolerance: 64.027864ms
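
The check above compares the guest's `date +%s.%N` reading against the host clock and accepts the ~64ms drift. A minimal sketch of that comparison; the 2-second tolerance and the helper name are illustrative assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output like "1726493886.213043192" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, nanos, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secs, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if nanos != "" {
		if nsec, err = strconv.ParseInt(nanos, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	host := time.Now()
	// Pretend the guest answered with host time plus 64ms, the delta the log reports.
	remote := host.Add(64 * time.Millisecond)
	guestOut := fmt.Sprintf("%d.%09d", remote.Unix(), remote.Nanosecond())

	guest, err := parseGuestClock(guestOut)
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta < 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is too large, would resync\n", delta)
	}
}
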
	I0916 13:38:06.254103  735111 start.go:83] releasing machines lock for "ha-190751-m02", held for 23.932931473s
	I0916 13:38:06.254131  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.254359  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:06.256826  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.257114  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.257145  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.259449  735111 out.go:177] * Found network options:
	I0916 13:38:06.260782  735111 out.go:177]   - NO_PROXY=192.168.39.94
	W0916 13:38:06.261938  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:38:06.261970  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.262427  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.262614  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.262735  735111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:38:06.262778  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	W0916 13:38:06.262835  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:38:06.262925  735111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:38:06.262946  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:06.265374  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.265773  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.265797  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.265852  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.265915  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.266074  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.266214  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.266309  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.266322  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:06.266330  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.266465  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.266569  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.266688  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.266825  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:06.504116  735111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:38:06.509809  735111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:38:06.509877  735111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:38:06.527632  735111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 13:38:06.527657  735111 start.go:495] detecting cgroup driver to use...
	I0916 13:38:06.527782  735111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:38:06.544086  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:38:06.557351  735111 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:38:06.557400  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:38:06.570277  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:38:06.583266  735111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:38:06.703947  735111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:38:06.860845  735111 docker.go:233] disabling docker service ...
	I0916 13:38:06.860920  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:38:06.884863  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:38:06.897537  735111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:38:07.025766  735111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:38:07.141630  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:38:07.155310  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:38:07.173092  735111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:38:07.173165  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.183550  735111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:38:07.183607  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.193383  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.203087  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.214974  735111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:38:07.225114  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.234675  735111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.252702  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.262650  735111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:38:07.271745  735111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 13:38:07.271787  735111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 13:38:07.284119  735111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 13:38:07.293938  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:38:07.404511  735111 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 13:38:07.493651  735111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:38:07.493733  735111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:38:07.498368  735111 start.go:563] Will wait 60s for crictl version
	I0916 13:38:07.498416  735111 ssh_runner.go:195] Run: which crictl
	I0916 13:38:07.501982  735111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:38:07.540227  735111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:38:07.540325  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:38:07.567997  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:38:07.597231  735111 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:38:07.598490  735111 out.go:177]   - env NO_PROXY=192.168.39.94
	I0916 13:38:07.599534  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:07.602146  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:07.602513  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:07.602537  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:07.602694  735111 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:38:07.606644  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
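
The bash one-liner above rewrites /etc/hosts through a temp file: it filters out any stale host.minikube.internal line, appends the gateway entry, then copies the result back with sudo. A rough Go equivalent of the filter-and-append part, operating on a string instead of the real file (which needs root to replace):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any existing line for the given hostname and appends
// "ip\thostname", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it like grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
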
	I0916 13:38:07.619430  735111 mustload.go:65] Loading cluster: ha-190751
	I0916 13:38:07.619642  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:07.619896  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:07.619936  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:07.634458  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35067
	I0916 13:38:07.634853  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:07.635286  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:07.635307  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:07.635623  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:07.635817  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:38:07.637120  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:38:07.637408  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:07.637440  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:07.651391  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0916 13:38:07.651748  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:07.652159  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:07.652180  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:07.652503  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:07.652658  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:38:07.652807  735111 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.192
	I0916 13:38:07.652823  735111 certs.go:194] generating shared ca certs ...
	I0916 13:38:07.652839  735111 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:38:07.652987  735111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:38:07.653037  735111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:38:07.653049  735111 certs.go:256] generating profile certs ...
	I0916 13:38:07.653138  735111 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:38:07.653170  735111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412
	I0916 13:38:07.653190  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.192 192.168.39.254]
	I0916 13:38:07.764013  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412 ...
	I0916 13:38:07.764044  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412: {Name:mk58560f2a84b27105eff3bc12cf91cf12104359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:38:07.764267  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412 ...
	I0916 13:38:07.764285  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412: {Name:mk657f19070c49dca56345e0ae2a1dcf27308040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:38:07.764391  735111 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:38:07.764569  735111 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
	I0916 13:38:07.764766  735111 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:38:07.764785  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:38:07.764804  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:38:07.764831  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:38:07.764848  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:38:07.764865  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:38:07.764879  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:38:07.764896  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:38:07.764913  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:38:07.764992  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:38:07.765036  735111 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:38:07.765050  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:38:07.765080  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:38:07.765113  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:38:07.765145  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:38:07.765197  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:38:07.765232  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:38:07.765253  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:07.765271  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:38:07.765309  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:38:07.767870  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:07.768261  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:38:07.768284  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:07.768510  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:38:07.768700  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:38:07.768842  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:38:07.768975  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:38:07.849931  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 13:38:07.855030  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 13:38:07.866970  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 13:38:07.871340  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 13:38:07.883132  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 13:38:07.887581  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 13:38:07.898269  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 13:38:07.902673  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 13:38:07.913972  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 13:38:07.918388  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 13:38:07.928944  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 13:38:07.933508  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 13:38:07.943498  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:38:07.968310  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:38:07.991824  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:38:08.014029  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:38:08.036224  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 13:38:08.058343  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 13:38:08.080985  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:38:08.103508  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:38:08.125691  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:38:08.148890  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:38:08.170558  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:38:08.192449  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 13:38:08.208626  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 13:38:08.227317  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 13:38:08.246057  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 13:38:08.262149  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 13:38:08.277743  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 13:38:08.294944  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 13:38:08.310828  735111 ssh_runner.go:195] Run: openssl version
	I0916 13:38:08.316330  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:38:08.326533  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:08.330848  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:08.330904  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:08.336356  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:38:08.346444  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:38:08.356609  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:38:08.360738  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:38:08.360786  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:38:08.366029  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 13:38:08.376215  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:38:08.386857  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:38:08.391761  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:38:08.391820  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:38:08.397361  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 13:38:08.409079  735111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:38:08.413300  735111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 13:38:08.413358  735111 kubeadm.go:934] updating node {m02 192.168.39.192 8443 v1.31.1 crio true true} ...
	I0916 13:38:08.413457  735111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 13:38:08.413482  735111 kube-vip.go:115] generating kube-vip config ...
	I0916 13:38:08.413511  735111 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:38:08.431179  735111 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:38:08.431241  735111 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
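
The kube-vip static-pod manifest above is produced by filling a template with the cluster VIP (192.168.39.254) and API server port before it is written under /etc/kubernetes/manifests. A much-reduced sketch of that templating step; only two fields are templated here and the pod spec is heavily trimmed, so treat it as an illustration rather than minikube's actual template.

package main

import (
	"os"
	"text/template"
)

const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
`

func main() {
	// Render the trimmed manifest with the values seen in the log.
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, struct {
		VIP  string
		Port string
	}{VIP: "192.168.39.254", Port: "8443"})
}
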
	I0916 13:38:08.431287  735111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:38:08.441183  735111 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 13:38:08.441223  735111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 13:38:08.450679  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 13:38:08.450701  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:38:08.450754  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:38:08.450842  735111 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 13:38:08.450894  735111 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 13:38:08.454948  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 13:38:08.454974  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 13:38:09.088897  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:38:09.089006  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:38:09.093915  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 13:38:09.093953  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 13:38:09.262028  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:38:09.298220  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:38:09.298340  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:38:09.305048  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 13:38:09.305086  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 13:38:09.689691  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 13:38:09.699624  735111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 13:38:09.715725  735111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:38:09.733713  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
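The transfers above stage kubectl, kubeadm and kubelet onto the node; a rough manual equivalent, using the same release URLs and .sha256 checksum files referenced in the log, would be (a sketch, run on the node):

  VER=v1.31.1
  sudo mkdir -p /var/lib/minikube/binaries/$VER
  for bin in kubectl kubeadm kubelet; do
    curl -fsSLo /tmp/$bin        https://dl.k8s.io/release/$VER/bin/linux/amd64/$bin
    curl -fsSLo /tmp/$bin.sha256 https://dl.k8s.io/release/$VER/bin/linux/amd64/$bin.sha256
    echo "$(cat /tmp/$bin.sha256)  /tmp/$bin" | sha256sum --check -   # same checksum files minikube verifies against
    sudo install -m 0755 /tmp/$bin /var/lib/minikube/binaries/$VER/$bin
  done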
	I0916 13:38:09.751995  735111 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:38:09.755951  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:38:09.768309  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:38:09.903306  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:38:09.921100  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:38:09.921542  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:09.921603  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:09.937177  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0916 13:38:09.937561  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:09.938063  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:09.938092  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:09.938518  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:09.938725  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:38:09.938876  735111 start.go:317] joinCluster: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:38:09.938973  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 13:38:09.938988  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:38:09.942383  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:09.942918  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:38:09.942952  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:09.943199  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:38:09.943406  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:38:09.943587  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:38:09.943737  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:38:10.088194  735111 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:38:10.088240  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whudzs.boc3qvd5sgl21n61 --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m02 --control-plane --apiserver-advertise-address=192.168.39.192 --apiserver-bind-port=8443"
	I0916 13:38:31.686672  735111 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whudzs.boc3qvd5sgl21n61 --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m02 --control-plane --apiserver-advertise-address=192.168.39.192 --apiserver-bind-port=8443": (21.59840385s)
	I0916 13:38:31.686721  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 13:38:32.210939  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-190751-m02 minikube.k8s.io/updated_at=2024_09_16T13_38_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86 minikube.k8s.io/name=ha-190751 minikube.k8s.io/primary=false
	I0916 13:38:32.330736  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-190751-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 13:38:32.473220  735111 start.go:319] duration metric: took 22.53433791s to joinCluster
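At this point the second control-plane node has joined through the kubeadm join shown above, been labeled, and had its control-plane taint removed. An equivalent manual verification would be roughly (a sketch; node names as reported in the log):

  kubectl get nodes -o wide                                     # lists both ha-190751 and ha-190751-m02
  kubectl -n kube-system get pods -l component=etcd -o wide     # one etcd member per control-plane node
  kubectl get node ha-190751-m02 --show-labels                  # includes minikube.k8s.io/primary=false from the label step above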
	I0916 13:38:32.473301  735111 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:38:32.473638  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:32.475796  735111 out.go:177] * Verifying Kubernetes components...
	I0916 13:38:32.477071  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:38:32.708074  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:38:32.732989  735111 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:38:32.733289  735111 kapi.go:59] client config for ha-190751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 13:38:32.733358  735111 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.94:8443
	I0916 13:38:32.733654  735111 node_ready.go:35] waiting up to 6m0s for node "ha-190751-m02" to be "Ready" ...
	I0916 13:38:32.733792  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:32.733802  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:32.733816  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:32.733821  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:32.743487  735111 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 13:38:33.234052  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:33.234084  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:33.234096  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:33.234101  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:33.248083  735111 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0916 13:38:33.733904  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:33.733929  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:33.733942  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:33.733947  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:33.738779  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:34.234664  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:34.234686  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:34.234693  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:34.234698  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:34.239999  735111 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 13:38:34.734843  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:34.734865  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:34.734877  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:34.734880  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:34.738691  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:34.739212  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:35.234902  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:35.234925  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:35.234933  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:35.234937  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:35.248275  735111 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0916 13:38:35.733866  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:35.733890  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:35.733899  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:35.733903  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:35.737774  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:36.234952  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:36.234978  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:36.234987  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:36.234991  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:36.239485  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:36.733892  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:36.733924  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:36.733935  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:36.733942  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:36.737219  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:37.234760  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:37.234784  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:37.234793  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:37.234797  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:37.237476  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:37.238039  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:37.734751  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:37.734776  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:37.734787  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:37.734793  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:37.737512  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:38.234526  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:38.234555  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:38.234566  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:38.234571  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:38.237472  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:38.734671  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:38.734693  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:38.734701  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:38.734704  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:38.738203  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:39.233903  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:39.233930  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:39.233939  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:39.233945  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:39.238849  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:39.239407  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:39.734899  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:39.734925  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:39.734934  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:39.734939  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:39.737985  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:40.234645  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:40.234672  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:40.234681  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:40.234685  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:40.239039  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:40.734018  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:40.734050  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:40.734062  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:40.734067  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:40.737361  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:41.234709  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:41.234731  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:41.234738  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:41.234742  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:41.238698  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:41.734406  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:41.734430  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:41.734441  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:41.734447  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:41.737719  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:41.738587  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:42.234046  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:42.234072  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:42.234090  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:42.234096  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:42.237631  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:42.734809  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:42.734833  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:42.734841  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:42.734846  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:42.738196  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:43.234205  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:43.234231  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:43.234241  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:43.234245  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:43.238473  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:43.734653  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:43.734681  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:43.734693  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:43.734700  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:43.737734  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:44.234881  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:44.234907  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:44.234923  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:44.234930  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:44.237991  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:44.238553  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:44.733911  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:44.733933  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:44.733941  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:44.733945  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:44.736682  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:45.233969  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:45.233992  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:45.234000  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:45.234005  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:45.237902  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:45.734865  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:45.734888  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:45.734899  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:45.734902  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:45.738198  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:46.233935  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:46.233961  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:46.233972  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:46.233979  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:46.237819  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:46.238605  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:46.733950  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:46.733974  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:46.733987  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:46.733995  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:46.737023  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:47.234426  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:47.234450  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.234458  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.234461  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.237977  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:47.238653  735111 node_ready.go:49] node "ha-190751-m02" has status "Ready":"True"
	I0916 13:38:47.238672  735111 node_ready.go:38] duration metric: took 14.50498186s for node "ha-190751-m02" to be "Ready" ...
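The polling loop above is the programmatic form of waiting for the node's Ready condition; with kubectl it is roughly (a sketch using the same 6-minute budget):

  kubectl wait --for=condition=Ready node/ha-190751-m02 --timeout=6m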
	I0916 13:38:47.238681  735111 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 13:38:47.238758  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:47.238770  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.238779  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.238781  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.241850  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:47.249481  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.249553  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw8n
	I0916 13:38:47.249562  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.249571  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.249575  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.251850  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.252467  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.252484  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.252493  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.252500  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.254527  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.254963  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.254978  735111 pod_ready.go:82] duration metric: took 5.476574ms for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.254986  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.255032  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gzkpj
	I0916 13:38:47.255039  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.255047  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.255049  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.256840  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.257430  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.257444  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.257451  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.257455  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.259455  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.260052  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.260066  735111 pod_ready.go:82] duration metric: took 5.074604ms for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.260075  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.260116  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751
	I0916 13:38:47.260124  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.260130  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.260134  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.262250  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.262686  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.262699  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.262706  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.262710  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.264543  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.264871  735111 pod_ready.go:93] pod "etcd-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.264885  735111 pod_ready.go:82] duration metric: took 4.80542ms for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.264893  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.264930  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m02
	I0916 13:38:47.264937  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.264943  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.264946  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.266896  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.267650  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:47.267664  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.267671  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.267676  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.269655  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.270430  735111 pod_ready.go:93] pod "etcd-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.270447  735111 pod_ready.go:82] duration metric: took 5.54867ms for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.270464  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.434908  735111 request.go:632] Waited for 164.351719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:38:47.434966  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:38:47.434972  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.434979  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.434982  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.437981  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.635096  735111 request.go:632] Waited for 196.347109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.635183  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.635190  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.635200  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.635209  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.637835  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.638549  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.638573  735111 pod_ready.go:82] duration metric: took 368.102477ms for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.638583  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.835392  735111 request.go:632] Waited for 196.733194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:38:47.835483  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:38:47.835488  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.835496  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.835500  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.838587  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.034836  735111 request.go:632] Waited for 195.365767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.034892  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.034897  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.034904  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.034909  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.037912  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:48.038587  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:48.038604  735111 pod_ready.go:82] duration metric: took 400.01422ms for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.038612  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.234735  735111 request.go:632] Waited for 196.056514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:38:48.234801  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:38:48.234806  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.234813  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.234817  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.237710  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:48.434847  735111 request.go:632] Waited for 196.364736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:48.434931  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:48.434937  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.434945  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.434949  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.438033  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.438805  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:48.438826  735111 pod_ready.go:82] duration metric: took 400.207153ms for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.438836  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.634856  735111 request.go:632] Waited for 195.950058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:38:48.634915  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:38:48.634922  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.634930  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.634934  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.638002  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.835350  735111 request.go:632] Waited for 196.358659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.835415  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.835421  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.835427  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.835431  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.838502  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.839040  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:48.839057  735111 pod_ready.go:82] duration metric: took 400.214991ms for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.839066  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.035145  735111 request.go:632] Waited for 195.967255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:38:49.035205  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:38:49.035211  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.035219  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.035224  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.038680  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:49.234891  735111 request.go:632] Waited for 195.359474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:49.234967  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:49.234972  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.234980  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.234984  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.238513  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:49.239095  735111 pod_ready.go:93] pod "kube-proxy-24q9n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:49.239112  735111 pod_ready.go:82] duration metric: took 400.039577ms for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.239121  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.435296  735111 request.go:632] Waited for 196.076536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:38:49.435369  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:38:49.435377  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.435391  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.435400  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.438652  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:49.634610  735111 request.go:632] Waited for 195.295347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:49.634669  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:49.634674  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.634682  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.634685  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.637513  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:49.637928  735111 pod_ready.go:93] pod "kube-proxy-9d7kt" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:49.637947  735111 pod_ready.go:82] duration metric: took 398.820171ms for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.637955  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.835050  735111 request.go:632] Waited for 197.017122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:38:49.835113  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:38:49.835118  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.835126  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.835131  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.837981  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:50.034991  735111 request.go:632] Waited for 196.406773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:50.035048  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:50.035053  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.035059  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.035063  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.038370  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:50.038828  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:50.038845  735111 pod_ready.go:82] duration metric: took 400.884474ms for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:50.038853  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:50.235000  735111 request.go:632] Waited for 196.046513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:38:50.235060  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:38:50.235065  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.235072  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.235076  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.240407  735111 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 13:38:50.435277  735111 request.go:632] Waited for 194.360733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:50.435339  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:50.435344  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.435358  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.435364  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.438173  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:50.438657  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:50.438675  735111 pod_ready.go:82] duration metric: took 399.816261ms for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:50.438685  735111 pod_ready.go:39] duration metric: took 3.19999197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
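The per-pod checks above can be approximated with kubectl wait against the label selectors listed in the log (a sketch, not the exact selectors the test code uses):

  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m
  kubectl -n kube-system wait --for=condition=Ready pod \
    -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' --timeout=6m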
	I0916 13:38:50.438699  735111 api_server.go:52] waiting for apiserver process to appear ...
	I0916 13:38:50.438752  735111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:38:50.456008  735111 api_server.go:72] duration metric: took 17.982669041s to wait for apiserver process to appear ...
	I0916 13:38:50.456030  735111 api_server.go:88] waiting for apiserver healthz status ...
	I0916 13:38:50.456054  735111 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0916 13:38:50.460008  735111 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0916 13:38:50.460062  735111 round_trippers.go:463] GET https://192.168.39.94:8443/version
	I0916 13:38:50.460067  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.460074  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.460079  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.460856  735111 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 13:38:50.460955  735111 api_server.go:141] control plane version: v1.31.1
	I0916 13:38:50.460971  735111 api_server.go:131] duration metric: took 4.934707ms to wait for apiserver health ...
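The health and version probes above hit two API-server endpoints that are readable without authentication by default; a manual equivalent (a sketch; -k skips TLS verification against the minikube CA):

  curl -k https://192.168.39.94:8443/healthz    # expect: ok
  curl -k https://192.168.39.94:8443/version    # reports the control plane version, v1.31.1 here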
	I0916 13:38:50.460978  735111 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 13:38:50.635378  735111 request.go:632] Waited for 174.309285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:50.635436  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:50.635441  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.635448  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.635452  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.639465  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:50.644353  735111 system_pods.go:59] 17 kube-system pods found
	I0916 13:38:50.644386  735111 system_pods.go:61] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:38:50.644394  735111 system_pods.go:61] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:38:50.644399  735111 system_pods.go:61] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:38:50.644404  735111 system_pods.go:61] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:38:50.644409  735111 system_pods.go:61] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:38:50.644414  735111 system_pods.go:61] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:38:50.644419  735111 system_pods.go:61] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:38:50.644425  735111 system_pods.go:61] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:38:50.644430  735111 system_pods.go:61] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:38:50.644437  735111 system_pods.go:61] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:38:50.644444  735111 system_pods.go:61] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:38:50.644450  735111 system_pods.go:61] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:38:50.644456  735111 system_pods.go:61] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:38:50.644462  735111 system_pods.go:61] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:38:50.644471  735111 system_pods.go:61] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:38:50.644479  735111 system_pods.go:61] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:38:50.644487  735111 system_pods.go:61] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:38:50.644495  735111 system_pods.go:74] duration metric: took 183.510256ms to wait for pod list to return data ...
	I0916 13:38:50.644507  735111 default_sa.go:34] waiting for default service account to be created ...
	I0916 13:38:50.834929  735111 request.go:632] Waited for 190.338146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:38:50.834990  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:38:50.834996  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.835004  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.835008  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.838515  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:50.838779  735111 default_sa.go:45] found service account: "default"
	I0916 13:38:50.838798  735111 default_sa.go:55] duration metric: took 194.284036ms for default service account to be created ...
	I0916 13:38:50.838808  735111 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 13:38:51.035256  735111 request.go:632] Waited for 196.366226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:51.035349  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:51.035359  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:51.035373  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:51.035383  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:51.039582  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:51.044181  735111 system_pods.go:86] 17 kube-system pods found
	I0916 13:38:51.044208  735111 system_pods.go:89] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:38:51.044216  735111 system_pods.go:89] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:38:51.044221  735111 system_pods.go:89] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:38:51.044227  735111 system_pods.go:89] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:38:51.044232  735111 system_pods.go:89] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:38:51.044238  735111 system_pods.go:89] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:38:51.044243  735111 system_pods.go:89] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:38:51.044249  735111 system_pods.go:89] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:38:51.044259  735111 system_pods.go:89] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:38:51.044270  735111 system_pods.go:89] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:38:51.044276  735111 system_pods.go:89] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:38:51.044285  735111 system_pods.go:89] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:38:51.044290  735111 system_pods.go:89] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:38:51.044295  735111 system_pods.go:89] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:38:51.044301  735111 system_pods.go:89] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:38:51.044306  735111 system_pods.go:89] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:38:51.044314  735111 system_pods.go:89] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:38:51.044327  735111 system_pods.go:126] duration metric: took 205.507719ms to wait for k8s-apps to be running ...
	I0916 13:38:51.044339  735111 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 13:38:51.044389  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:38:51.066353  735111 system_svc.go:56] duration metric: took 22.003735ms WaitForService to wait for kubelet
	I0916 13:38:51.066383  735111 kubeadm.go:582] duration metric: took 18.593051314s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:38:51.066407  735111 node_conditions.go:102] verifying NodePressure condition ...
	I0916 13:38:51.234843  735111 request.go:632] Waited for 168.334045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes
	I0916 13:38:51.234899  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes
	I0916 13:38:51.234903  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:51.234911  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:51.234916  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:51.238476  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:51.239346  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:38:51.239378  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:38:51.239395  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:38:51.239400  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:38:51.239408  735111 node_conditions.go:105] duration metric: took 172.993764ms to run NodePressure ...
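
The block above is minikube polling the API server until the new cluster looks healthy: every kube-system pod Running, the default service account present, kubelet active, and per-node capacity readable for the NodePressure check. The same checks, reduced to a minimal client-go sketch (illustrative only, not minikube's actual code; the kubeconfig path is a placeholder):

// readiness.go - illustrative sketch of the checks logged above; not minikube's actual code.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; minikube writes its own kubeconfig under MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// "waiting for k8s-apps to be running": every kube-system pod should report Running.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != v1.PodRunning {
			fmt.Printf("not running yet: %s (%s)\n", p.Name, p.Status.Phase)
		}
	}

	// "verifying NodePressure condition": read per-node cpu and ephemeral-storage capacity.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[v1.ResourceCPU]
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
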
	I0916 13:38:51.239469  735111 start.go:241] waiting for startup goroutines ...
	I0916 13:38:51.239512  735111 start.go:255] writing updated cluster config ...
	I0916 13:38:51.241713  735111 out.go:201] 
	I0916 13:38:51.243012  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:51.243130  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:38:51.244505  735111 out.go:177] * Starting "ha-190751-m03" control-plane node in "ha-190751" cluster
	I0916 13:38:51.245537  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:38:51.245555  735111 cache.go:56] Caching tarball of preloaded images
	I0916 13:38:51.245661  735111 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:38:51.245690  735111 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:38:51.245781  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:38:51.245930  735111 start.go:360] acquireMachinesLock for ha-190751-m03: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:38:51.245973  735111 start.go:364] duration metric: took 24.574µs to acquireMachinesLock for "ha-190751-m03"
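
acquireMachinesLock serializes machine creation for the profile; the Delay:500ms and Timeout:13m0s fields in the log suggest a poll-until-acquired lock with a deadline. A generic sketch of that pattern using an exclusive lock file (illustrative only; minikube uses its own locking library, not this code):

// lockfile.go - generic poll-with-timeout file lock; illustrative only.
package main

import (
	"errors"
	"os"
	"time"
)

// acquire tries to create path exclusively, retrying every delay until timeout expires.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for lock " + path)
		}
		time.Sleep(delay) // e.g. the 500ms Delay seen in the log
	}
}
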
	I0916 13:38:51.245996  735111 start.go:93] Provisioning new machine with config: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:38:51.246082  735111 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0916 13:38:51.247441  735111 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 13:38:51.247524  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:51.247560  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:51.262736  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0916 13:38:51.263173  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:51.263642  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:51.263660  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:51.263945  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:51.264127  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:38:51.264232  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:38:51.264361  735111 start.go:159] libmachine.API.Create for "ha-190751" (driver="kvm2")
	I0916 13:38:51.264396  735111 client.go:168] LocalClient.Create starting
	I0916 13:38:51.264433  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 13:38:51.264469  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:38:51.264484  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:38:51.264535  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 13:38:51.264552  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:38:51.264562  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:38:51.264579  735111 main.go:141] libmachine: Running pre-create checks...
	I0916 13:38:51.264586  735111 main.go:141] libmachine: (ha-190751-m03) Calling .PreCreateCheck
	I0916 13:38:51.264747  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetConfigRaw
	I0916 13:38:51.265081  735111 main.go:141] libmachine: Creating machine...
	I0916 13:38:51.265094  735111 main.go:141] libmachine: (ha-190751-m03) Calling .Create
	I0916 13:38:51.265268  735111 main.go:141] libmachine: (ha-190751-m03) Creating KVM machine...
	I0916 13:38:51.266521  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found existing default KVM network
	I0916 13:38:51.266625  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found existing private KVM network mk-ha-190751
	I0916 13:38:51.266723  735111 main.go:141] libmachine: (ha-190751-m03) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03 ...
	I0916 13:38:51.266747  735111 main.go:141] libmachine: (ha-190751-m03) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 13:38:51.266827  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.266719  735844 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:38:51.266915  735111 main.go:141] libmachine: (ha-190751-m03) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 13:38:51.537695  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.537521  735844 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa...
	I0916 13:38:51.682729  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.682629  735844 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/ha-190751-m03.rawdisk...
	I0916 13:38:51.682756  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Writing magic tar header
	I0916 13:38:51.682769  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Writing SSH key tar header
	I0916 13:38:51.682778  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.682750  735844 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03 ...
	I0916 13:38:51.682886  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03
	I0916 13:38:51.682914  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 13:38:51.682926  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:38:51.682942  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03 (perms=drwx------)
	I0916 13:38:51.682963  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 13:38:51.682974  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 13:38:51.682989  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 13:38:51.683003  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 13:38:51.683014  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 13:38:51.683026  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 13:38:51.683037  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins
	I0916 13:38:51.683047  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home
	I0916 13:38:51.683057  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Skipping /home - not owner
	I0916 13:38:51.683066  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 13:38:51.683076  735111 main.go:141] libmachine: (ha-190751-m03) Creating domain...
	I0916 13:38:51.683978  735111 main.go:141] libmachine: (ha-190751-m03) define libvirt domain using xml: 
	I0916 13:38:51.684000  735111 main.go:141] libmachine: (ha-190751-m03) <domain type='kvm'>
	I0916 13:38:51.684034  735111 main.go:141] libmachine: (ha-190751-m03)   <name>ha-190751-m03</name>
	I0916 13:38:51.684056  735111 main.go:141] libmachine: (ha-190751-m03)   <memory unit='MiB'>2200</memory>
	I0916 13:38:51.684062  735111 main.go:141] libmachine: (ha-190751-m03)   <vcpu>2</vcpu>
	I0916 13:38:51.684067  735111 main.go:141] libmachine: (ha-190751-m03)   <features>
	I0916 13:38:51.684072  735111 main.go:141] libmachine: (ha-190751-m03)     <acpi/>
	I0916 13:38:51.684078  735111 main.go:141] libmachine: (ha-190751-m03)     <apic/>
	I0916 13:38:51.684083  735111 main.go:141] libmachine: (ha-190751-m03)     <pae/>
	I0916 13:38:51.684090  735111 main.go:141] libmachine: (ha-190751-m03)     
	I0916 13:38:51.684095  735111 main.go:141] libmachine: (ha-190751-m03)   </features>
	I0916 13:38:51.684102  735111 main.go:141] libmachine: (ha-190751-m03)   <cpu mode='host-passthrough'>
	I0916 13:38:51.684106  735111 main.go:141] libmachine: (ha-190751-m03)   
	I0916 13:38:51.684111  735111 main.go:141] libmachine: (ha-190751-m03)   </cpu>
	I0916 13:38:51.684116  735111 main.go:141] libmachine: (ha-190751-m03)   <os>
	I0916 13:38:51.684120  735111 main.go:141] libmachine: (ha-190751-m03)     <type>hvm</type>
	I0916 13:38:51.684127  735111 main.go:141] libmachine: (ha-190751-m03)     <boot dev='cdrom'/>
	I0916 13:38:51.684131  735111 main.go:141] libmachine: (ha-190751-m03)     <boot dev='hd'/>
	I0916 13:38:51.684150  735111 main.go:141] libmachine: (ha-190751-m03)     <bootmenu enable='no'/>
	I0916 13:38:51.684163  735111 main.go:141] libmachine: (ha-190751-m03)   </os>
	I0916 13:38:51.684174  735111 main.go:141] libmachine: (ha-190751-m03)   <devices>
	I0916 13:38:51.684184  735111 main.go:141] libmachine: (ha-190751-m03)     <disk type='file' device='cdrom'>
	I0916 13:38:51.684201  735111 main.go:141] libmachine: (ha-190751-m03)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/boot2docker.iso'/>
	I0916 13:38:51.684217  735111 main.go:141] libmachine: (ha-190751-m03)       <target dev='hdc' bus='scsi'/>
	I0916 13:38:51.684227  735111 main.go:141] libmachine: (ha-190751-m03)       <readonly/>
	I0916 13:38:51.684234  735111 main.go:141] libmachine: (ha-190751-m03)     </disk>
	I0916 13:38:51.684267  735111 main.go:141] libmachine: (ha-190751-m03)     <disk type='file' device='disk'>
	I0916 13:38:51.684291  735111 main.go:141] libmachine: (ha-190751-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 13:38:51.684309  735111 main.go:141] libmachine: (ha-190751-m03)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/ha-190751-m03.rawdisk'/>
	I0916 13:38:51.684321  735111 main.go:141] libmachine: (ha-190751-m03)       <target dev='hda' bus='virtio'/>
	I0916 13:38:51.684329  735111 main.go:141] libmachine: (ha-190751-m03)     </disk>
	I0916 13:38:51.684340  735111 main.go:141] libmachine: (ha-190751-m03)     <interface type='network'>
	I0916 13:38:51.684352  735111 main.go:141] libmachine: (ha-190751-m03)       <source network='mk-ha-190751'/>
	I0916 13:38:51.684365  735111 main.go:141] libmachine: (ha-190751-m03)       <model type='virtio'/>
	I0916 13:38:51.684376  735111 main.go:141] libmachine: (ha-190751-m03)     </interface>
	I0916 13:38:51.684386  735111 main.go:141] libmachine: (ha-190751-m03)     <interface type='network'>
	I0916 13:38:51.684393  735111 main.go:141] libmachine: (ha-190751-m03)       <source network='default'/>
	I0916 13:38:51.684401  735111 main.go:141] libmachine: (ha-190751-m03)       <model type='virtio'/>
	I0916 13:38:51.684410  735111 main.go:141] libmachine: (ha-190751-m03)     </interface>
	I0916 13:38:51.684420  735111 main.go:141] libmachine: (ha-190751-m03)     <serial type='pty'>
	I0916 13:38:51.684432  735111 main.go:141] libmachine: (ha-190751-m03)       <target port='0'/>
	I0916 13:38:51.684446  735111 main.go:141] libmachine: (ha-190751-m03)     </serial>
	I0916 13:38:51.684454  735111 main.go:141] libmachine: (ha-190751-m03)     <console type='pty'>
	I0916 13:38:51.684459  735111 main.go:141] libmachine: (ha-190751-m03)       <target type='serial' port='0'/>
	I0916 13:38:51.684466  735111 main.go:141] libmachine: (ha-190751-m03)     </console>
	I0916 13:38:51.684473  735111 main.go:141] libmachine: (ha-190751-m03)     <rng model='virtio'>
	I0916 13:38:51.684481  735111 main.go:141] libmachine: (ha-190751-m03)       <backend model='random'>/dev/random</backend>
	I0916 13:38:51.684486  735111 main.go:141] libmachine: (ha-190751-m03)     </rng>
	I0916 13:38:51.684493  735111 main.go:141] libmachine: (ha-190751-m03)     
	I0916 13:38:51.684497  735111 main.go:141] libmachine: (ha-190751-m03)     
	I0916 13:38:51.684501  735111 main.go:141] libmachine: (ha-190751-m03)   </devices>
	I0916 13:38:51.684506  735111 main.go:141] libmachine: (ha-190751-m03) </domain>
	I0916 13:38:51.684528  735111 main.go:141] libmachine: (ha-190751-m03) 
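
The tab-prefixed lines above are the libvirt domain XML the kvm2 driver generates for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk, and one NIC on mk-ha-190751 plus one on the default network. Defining and booting such a domain through the Go libvirt bindings looks roughly like this (a sketch assuming libvirt.org/go/libvirt; the XML string stands in for the document printed above):

// definedomain.go - sketch of defining and starting a libvirt domain; illustrative only.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same URI as KVMQemuURI in the cluster config logged above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domainXML would be the <domain type='kvm'>...</domain> document from the log.
	domainXML := "<domain type='kvm'>...</domain>"

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." boots the VM
		log.Fatal(err)
	}
}
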
	I0916 13:38:51.690532  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:87:7b:da in network default
	I0916 13:38:51.692006  735111 main.go:141] libmachine: (ha-190751-m03) Ensuring networks are active...
	I0916 13:38:51.692023  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:51.692718  735111 main.go:141] libmachine: (ha-190751-m03) Ensuring network default is active
	I0916 13:38:51.693016  735111 main.go:141] libmachine: (ha-190751-m03) Ensuring network mk-ha-190751 is active
	I0916 13:38:51.693413  735111 main.go:141] libmachine: (ha-190751-m03) Getting domain xml...
	I0916 13:38:51.694149  735111 main.go:141] libmachine: (ha-190751-m03) Creating domain...
	I0916 13:38:52.898349  735111 main.go:141] libmachine: (ha-190751-m03) Waiting to get IP...
	I0916 13:38:52.899012  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:52.899379  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:52.899459  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:52.899379  735844 retry.go:31] will retry after 267.73261ms: waiting for machine to come up
	I0916 13:38:53.168962  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:53.169450  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:53.169477  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:53.169397  735844 retry.go:31] will retry after 355.778778ms: waiting for machine to come up
	I0916 13:38:53.527048  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:53.527444  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:53.527475  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:53.527403  735844 retry.go:31] will retry after 429.135107ms: waiting for machine to come up
	I0916 13:38:53.958061  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:53.958483  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:53.958507  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:53.958433  735844 retry.go:31] will retry after 431.318286ms: waiting for machine to come up
	I0916 13:38:54.391723  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:54.392132  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:54.392154  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:54.392075  735844 retry.go:31] will retry after 601.011895ms: waiting for machine to come up
	I0916 13:38:54.994478  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:54.994857  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:54.994885  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:54.994816  735844 retry.go:31] will retry after 853.395587ms: waiting for machine to come up
	I0916 13:38:55.849861  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:55.850269  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:55.850295  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:55.850218  735844 retry.go:31] will retry after 1.068824601s: waiting for machine to come up
	I0916 13:38:56.920153  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:56.920525  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:56.920556  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:56.920497  735844 retry.go:31] will retry after 1.007149511s: waiting for machine to come up
	I0916 13:38:57.929630  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:57.930174  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:57.930196  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:57.930118  735844 retry.go:31] will retry after 1.469842637s: waiting for machine to come up
	I0916 13:38:59.401026  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:59.401415  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:59.401440  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:59.401380  735844 retry.go:31] will retry after 2.104821665s: waiting for machine to come up
	I0916 13:39:01.507676  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:01.508197  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:01.508228  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:01.508132  735844 retry.go:31] will retry after 2.346855381s: waiting for machine to come up
	I0916 13:39:03.857755  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:03.858275  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:03.858329  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:03.858228  735844 retry.go:31] will retry after 3.255293037s: waiting for machine to come up
	I0916 13:39:07.114891  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:07.115304  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:07.115323  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:07.115261  735844 retry.go:31] will retry after 3.528582737s: waiting for machine to come up
	I0916 13:39:10.646649  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:10.647143  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:10.647171  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:10.647092  735844 retry.go:31] will retry after 3.488162223s: waiting for machine to come up
	I0916 13:39:14.138431  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:14.138871  735111 main.go:141] libmachine: (ha-190751-m03) Found IP for machine: 192.168.39.134
	I0916 13:39:14.138913  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has current primary IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
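
The retry.go lines above poll the network's DHCP leases for the new MAC address, sleeping a jittered, growing interval between attempts until the guest reports 192.168.39.134. The same pattern in generic form (illustrative only; the delays are made up, not minikube's exact schedule):

// waitforip.go - generic jittered-backoff poll, as suggested by the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a growing, jittered delay until it succeeds or attempts run out.
func waitFor(check func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := check(); err == nil {
			return ip, nil
		}
		// Jitter so concurrent waiters do not poll in lockstep, then grow the base delay.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("machine never reported an IP address")
}
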
	I0916 13:39:14.138922  735111 main.go:141] libmachine: (ha-190751-m03) Reserving static IP address...
	I0916 13:39:14.139293  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find host DHCP lease matching {name: "ha-190751-m03", mac: "52:54:00:0e:4e:0a", ip: "192.168.39.134"} in network mk-ha-190751
	I0916 13:39:14.210728  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Getting to WaitForSSH function...
	I0916 13:39:14.210765  735111 main.go:141] libmachine: (ha-190751-m03) Reserved static IP address: 192.168.39.134
	I0916 13:39:14.210775  735111 main.go:141] libmachine: (ha-190751-m03) Waiting for SSH to be available...
	I0916 13:39:14.213475  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:14.213855  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751
	I0916 13:39:14.213886  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find defined IP address of network mk-ha-190751 interface with MAC address 52:54:00:0e:4e:0a
	I0916 13:39:14.214225  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH client type: external
	I0916 13:39:14.214252  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa (-rw-------)
	I0916 13:39:14.214278  735111 main.go:141] libmachine: (ha-190751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:39:14.214293  735111 main.go:141] libmachine: (ha-190751-m03) DBG | About to run SSH command:
	I0916 13:39:14.214314  735111 main.go:141] libmachine: (ha-190751-m03) DBG | exit 0
	I0916 13:39:14.217901  735111 main.go:141] libmachine: (ha-190751-m03) DBG | SSH cmd err, output: exit status 255: 
	I0916 13:39:14.217926  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 13:39:14.217942  735111 main.go:141] libmachine: (ha-190751-m03) DBG | command : exit 0
	I0916 13:39:14.217953  735111 main.go:141] libmachine: (ha-190751-m03) DBG | err     : exit status 255
	I0916 13:39:14.217965  735111 main.go:141] libmachine: (ha-190751-m03) DBG | output  : 
	I0916 13:39:17.218981  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Getting to WaitForSSH function...
	I0916 13:39:17.221212  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.221595  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.221616  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.221784  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH client type: external
	I0916 13:39:17.221810  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa (-rw-------)
	I0916 13:39:17.221840  735111 main.go:141] libmachine: (ha-190751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:39:17.221856  735111 main.go:141] libmachine: (ha-190751-m03) DBG | About to run SSH command:
	I0916 13:39:17.221869  735111 main.go:141] libmachine: (ha-190751-m03) DBG | exit 0
	I0916 13:39:17.349568  735111 main.go:141] libmachine: (ha-190751-m03) DBG | SSH cmd err, output: <nil>: 
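
WaitForSSH shells out to the system ssh client with the options shown above and treats a zero exit status from `exit 0` as proof that sshd is reachable; the first probe at 13:39:14 fails with exit status 255 because the guest is still booting, and the retry at 13:39:17 succeeds. A stripped-down sketch of that probe (illustrative only; the options, key path form, and docker user are taken from the log):

// sshprobe.go - probe SSH readiness by running `exit 0` remotely; illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A non-zero exit (e.g. status 255) means sshd is not reachable yet; retry later.
		fmt.Printf("ssh not ready: %v (%s)\n", err, out)
		return false
	}
	return true
}
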
	I0916 13:39:17.349894  735111 main.go:141] libmachine: (ha-190751-m03) KVM machine creation complete!
	I0916 13:39:17.350159  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetConfigRaw
	I0916 13:39:17.350743  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:17.350919  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:17.351092  735111 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 13:39:17.351104  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:39:17.352188  735111 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 13:39:17.352202  735111 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 13:39:17.352209  735111 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 13:39:17.352216  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.354508  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.354845  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.354884  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.355038  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.355191  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.355357  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.355512  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.355653  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.355852  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.355863  735111 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 13:39:17.456888  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:39:17.456915  735111 main.go:141] libmachine: Detecting the provisioner...
	I0916 13:39:17.456924  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.459979  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.460495  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.460524  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.460810  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.461011  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.461160  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.461326  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.461494  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.461705  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.461719  735111 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 13:39:17.562014  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 13:39:17.562068  735111 main.go:141] libmachine: found compatible host: buildroot
	I0916 13:39:17.562074  735111 main.go:141] libmachine: Provisioning with buildroot...
	I0916 13:39:17.562082  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:39:17.562340  735111 buildroot.go:166] provisioning hostname "ha-190751-m03"
	I0916 13:39:17.562369  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:39:17.562584  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.564921  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.565281  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.565303  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.565406  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.565575  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.565742  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.565889  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.566033  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.566231  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.566243  735111 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751-m03 && echo "ha-190751-m03" | sudo tee /etc/hostname
	I0916 13:39:17.684851  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751-m03
	
	I0916 13:39:17.684884  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.687807  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.688158  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.688188  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.688334  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.688504  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.688667  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.688820  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.688969  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.689174  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.689191  735111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:39:17.798755  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:39:17.798787  735111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:39:17.798807  735111 buildroot.go:174] setting up certificates
	I0916 13:39:17.798821  735111 provision.go:84] configureAuth start
	I0916 13:39:17.798834  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:39:17.799097  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:17.801945  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.802390  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.802418  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.802614  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.804893  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.805203  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.805231  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.805352  735111 provision.go:143] copyHostCerts
	I0916 13:39:17.805387  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:39:17.805422  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:39:17.805430  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:39:17.805514  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:39:17.805613  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:39:17.805639  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:39:17.805647  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:39:17.805701  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:39:17.805770  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:39:17.805793  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:39:17.805802  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:39:17.805836  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:39:17.805906  735111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751-m03 san=[127.0.0.1 192.168.39.134 ha-190751-m03 localhost minikube]
	I0916 13:39:17.870032  735111 provision.go:177] copyRemoteCerts
	I0916 13:39:17.870099  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:39:17.870126  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.872522  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.872837  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.872864  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.872980  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.873152  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.873300  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.873438  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:17.955555  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:39:17.955635  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:39:17.978952  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:39:17.979009  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 13:39:18.001031  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:39:18.001082  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 13:39:18.023641  735111 provision.go:87] duration metric: took 224.805023ms to configureAuth
	I0916 13:39:18.023667  735111 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:39:18.023847  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:39:18.023917  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.026697  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.027058  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.027085  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.027295  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.027491  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.027638  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.027736  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.027854  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:18.027999  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:18.028012  735111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:39:18.253860  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 13:39:18.253896  735111 main.go:141] libmachine: Checking connection to Docker...
	I0916 13:39:18.253908  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetURL
	I0916 13:39:18.255174  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using libvirt version 6000000
	I0916 13:39:18.257182  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.257566  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.257598  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.257788  735111 main.go:141] libmachine: Docker is up and running!
	I0916 13:39:18.257804  735111 main.go:141] libmachine: Reticulating splines...
	I0916 13:39:18.257812  735111 client.go:171] duration metric: took 26.993406027s to LocalClient.Create
	I0916 13:39:18.257839  735111 start.go:167] duration metric: took 26.993482617s to libmachine.API.Create "ha-190751"
	I0916 13:39:18.257849  735111 start.go:293] postStartSetup for "ha-190751-m03" (driver="kvm2")
	I0916 13:39:18.257862  735111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:39:18.257880  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.258114  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:39:18.258140  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.260112  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.260396  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.260424  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.260534  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.260698  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.260863  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.261006  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:18.339569  735111 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:39:18.343728  735111 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:39:18.343755  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:39:18.343830  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:39:18.343929  735111 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:39:18.343942  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:39:18.344054  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:39:18.352825  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:39:18.375620  735111 start.go:296] duration metric: took 117.756033ms for postStartSetup
	I0916 13:39:18.375681  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetConfigRaw
	I0916 13:39:18.376309  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:18.378881  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.379283  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.379309  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.379598  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:39:18.379820  735111 start.go:128] duration metric: took 27.133726733s to createHost
	I0916 13:39:18.379844  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.382112  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.382511  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.382542  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.382687  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.382870  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.383030  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.383189  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.383366  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:18.383580  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:18.383591  735111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:39:18.486014  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726493958.465865548
	
	I0916 13:39:18.486040  735111 fix.go:216] guest clock: 1726493958.465865548
	I0916 13:39:18.486049  735111 fix.go:229] Guest: 2024-09-16 13:39:18.465865548 +0000 UTC Remote: 2024-09-16 13:39:18.379833761 +0000 UTC m=+141.735737766 (delta=86.031787ms)
	I0916 13:39:18.486069  735111 fix.go:200] guest clock delta is within tolerance: 86.031787ms
	I0916 13:39:18.486076  735111 start.go:83] releasing machines lock for "ha-190751-m03", held for 27.240091901s
	I0916 13:39:18.486100  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.486351  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:18.488910  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.489269  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.489293  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.491335  735111 out.go:177] * Found network options:
	I0916 13:39:18.492394  735111 out.go:177]   - NO_PROXY=192.168.39.94,192.168.39.192
	W0916 13:39:18.493519  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 13:39:18.493541  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:39:18.493559  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.494017  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.494160  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.494258  735111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:39:18.494291  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	W0916 13:39:18.494369  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 13:39:18.494391  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:39:18.494456  735111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:39:18.494476  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.496983  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497179  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497422  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.497444  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497573  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.497589  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.497592  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497762  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.497774  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.497943  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.497959  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.498092  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.498128  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:18.498215  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:18.737954  735111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:39:18.744923  735111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:39:18.745001  735111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:39:18.764476  735111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 13:39:18.764503  735111 start.go:495] detecting cgroup driver to use...
	I0916 13:39:18.764573  735111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:39:18.781234  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:39:18.794933  735111 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:39:18.794980  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:39:18.808632  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:39:18.821849  735111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:39:18.942168  735111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:39:19.094357  735111 docker.go:233] disabling docker service ...
	I0916 13:39:19.094418  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:39:19.112538  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:39:19.125554  735111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:39:19.260134  735111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:39:19.379363  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:39:19.393121  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:39:19.410931  735111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:39:19.411005  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.421424  735111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:39:19.421473  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.431135  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.440675  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.451628  735111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:39:19.462860  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.474046  735111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.490880  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.501369  735111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:39:19.510937  735111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 13:39:19.510976  735111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 13:39:19.523965  735111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 13:39:19.533361  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:39:19.658818  735111 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 13:39:19.752488  735111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:39:19.752550  735111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:39:19.757903  735111 start.go:563] Will wait 60s for crictl version
	I0916 13:39:19.757956  735111 ssh_runner.go:195] Run: which crictl
	I0916 13:39:19.762158  735111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:39:19.799468  735111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:39:19.799536  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:39:19.826239  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:39:19.853266  735111 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:39:19.854407  735111 out.go:177]   - env NO_PROXY=192.168.39.94
	I0916 13:39:19.855494  735111 out.go:177]   - env NO_PROXY=192.168.39.94,192.168.39.192
	I0916 13:39:19.856378  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:19.858923  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:19.859322  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:19.859348  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:19.859587  735111 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:39:19.863498  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:39:19.875330  735111 mustload.go:65] Loading cluster: ha-190751
	I0916 13:39:19.875549  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:39:19.875792  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:39:19.875829  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:39:19.890796  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0916 13:39:19.891172  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:39:19.891639  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:39:19.891659  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:39:19.891993  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:39:19.892178  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:39:19.893735  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:39:19.894037  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:39:19.894075  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:39:19.908285  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36371
	I0916 13:39:19.908780  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:39:19.909236  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:39:19.909259  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:39:19.909576  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:39:19.909803  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:39:19.909978  735111 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.134
	I0916 13:39:19.909990  735111 certs.go:194] generating shared ca certs ...
	I0916 13:39:19.910004  735111 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:39:19.910128  735111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:39:19.910172  735111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:39:19.910183  735111 certs.go:256] generating profile certs ...
	I0916 13:39:19.910268  735111 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:39:19.910294  735111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689
	I0916 13:39:19.910319  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.192 192.168.39.134 192.168.39.254]
	I0916 13:39:20.158258  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689 ...
	I0916 13:39:20.158304  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689: {Name:mk8e75c47c0b8af5b7deff3b98169e4c7bff2c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:39:20.158501  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689 ...
	I0916 13:39:20.158515  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689: {Name:mk2b6257004806042da85fdc625bc8844312e657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:39:20.158595  735111 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:39:20.158739  735111 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
	I0916 13:39:20.158881  735111 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:39:20.158898  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:39:20.158913  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:39:20.158929  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:39:20.158944  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:39:20.158959  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:39:20.158974  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:39:20.158989  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:39:20.173756  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:39:20.173838  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:39:20.173877  735111 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:39:20.173890  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:39:20.173914  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:39:20.173940  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:39:20.173964  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:39:20.174009  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:39:20.174039  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.174057  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.174074  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.174121  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:39:20.177038  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:20.177466  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:39:20.177488  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:20.177715  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:39:20.177922  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:39:20.178082  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:39:20.178224  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:39:20.253980  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 13:39:20.260424  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 13:39:20.272373  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 13:39:20.276772  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 13:39:20.291797  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 13:39:20.295875  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 13:39:20.306292  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 13:39:20.310789  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 13:39:20.320754  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 13:39:20.324536  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 13:39:20.334814  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 13:39:20.338783  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 13:39:20.352083  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:39:20.380259  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:39:20.406780  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:39:20.429266  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:39:20.452746  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 13:39:20.476085  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 13:39:20.498261  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:39:20.520565  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:39:20.543260  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:39:20.566634  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:39:20.591982  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:39:20.617886  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 13:39:20.636903  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 13:39:20.655894  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 13:39:20.673701  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 13:39:20.691307  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 13:39:20.708148  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 13:39:20.725684  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 13:39:20.741649  735111 ssh_runner.go:195] Run: openssl version
	I0916 13:39:20.747350  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:39:20.757640  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.762088  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.762145  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.768483  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 13:39:20.778516  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:39:20.788315  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.792414  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.792463  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.797561  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:39:20.807429  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:39:20.817363  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.821541  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.821587  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.826869  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 13:39:20.836683  735111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:39:20.840506  735111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 13:39:20.840560  735111 kubeadm.go:934] updating node {m03 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0916 13:39:20.840651  735111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 13:39:20.840686  735111 kube-vip.go:115] generating kube-vip config ...
	I0916 13:39:20.840723  735111 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:39:20.855049  735111 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:39:20.855113  735111 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 13:39:20.855153  735111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:39:20.864429  735111 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 13:39:20.864470  735111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 13:39:20.873475  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 13:39:20.873499  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:39:20.873510  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 13:39:20.873529  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:39:20.873556  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:39:20.873573  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:39:20.873577  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 13:39:20.873617  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:39:20.891619  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:39:20.891623  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 13:39:20.891655  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 13:39:20.891661  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 13:39:20.891681  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 13:39:20.891696  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:39:20.906537  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 13:39:20.906560  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 13:39:21.709406  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 13:39:21.719559  735111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 13:39:21.736248  735111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:39:21.753899  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 13:39:21.770439  735111 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:39:21.774406  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:39:21.787696  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:39:21.922137  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:39:21.938877  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:39:21.939219  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:39:21.939287  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:39:21.955161  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0916 13:39:21.955639  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:39:21.956110  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:39:21.956129  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:39:21.956492  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:39:21.956670  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:39:21.956836  735111 start.go:317] joinCluster: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:39:21.957003  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 13:39:21.957020  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:39:21.959985  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:21.960436  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:39:21.960456  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:21.960607  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:39:21.960762  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:39:21.960900  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:39:21.961045  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:39:22.126228  735111 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:39:22.126281  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka76j2.9amzatrp4hsrar4a --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m03 --control-plane --apiserver-advertise-address=192.168.39.134 --apiserver-bind-port=8443"
	I0916 13:39:45.289639  735111 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka76j2.9amzatrp4hsrar4a --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m03 --control-plane --apiserver-advertise-address=192.168.39.134 --apiserver-bind-port=8443": (23.163318972s)
	I0916 13:39:45.289714  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 13:39:45.783946  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-190751-m03 minikube.k8s.io/updated_at=2024_09_16T13_39_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86 minikube.k8s.io/name=ha-190751 minikube.k8s.io/primary=false
	I0916 13:39:45.960776  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-190751-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 13:39:46.095266  735111 start.go:319] duration metric: took 24.138422609s to joinCluster
	I0916 13:39:46.095373  735111 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:39:46.095694  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:39:46.096702  735111 out.go:177] * Verifying Kubernetes components...
	I0916 13:39:46.097722  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:39:46.369679  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:39:46.407374  735111 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:39:46.407727  735111 kapi.go:59] client config for ha-190751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 13:39:46.407816  735111 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.94:8443
	I0916 13:39:46.408144  735111 node_ready.go:35] waiting up to 6m0s for node "ha-190751-m03" to be "Ready" ...
	I0916 13:39:46.408241  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:46.408250  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:46.408263  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:46.408274  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:46.411667  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:46.908463  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:46.908493  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:46.908507  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:46.908515  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:46.911963  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:47.408903  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:47.408934  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:47.408944  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:47.408951  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:47.413413  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:39:47.909411  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:47.909432  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:47.909441  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:47.909445  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:47.913196  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:48.409224  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:48.409244  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:48.409253  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:48.409260  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:48.412020  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:48.412635  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:48.909014  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:48.909042  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:48.909054  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:48.909059  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:48.912923  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:49.409193  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:49.409216  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:49.409224  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:49.409228  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:49.412619  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:49.909078  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:49.909099  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:49.909107  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:49.909119  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:49.911692  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:50.409259  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:50.409281  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:50.409289  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:50.409295  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:50.412356  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:50.413278  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:50.908598  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:50.908623  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:50.908634  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:50.908639  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:50.911506  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:51.408413  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:51.408442  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:51.408454  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:51.408462  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:51.411596  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:51.909366  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:51.909389  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:51.909400  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:51.909410  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:51.912625  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:52.409358  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:52.409379  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:52.409387  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:52.409390  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:52.412509  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:52.908543  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:52.908574  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:52.908586  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:52.908593  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:52.912433  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:52.913241  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:53.408433  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:53.408459  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:53.408472  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:53.408477  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:53.411673  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:53.908627  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:53.908650  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:53.908659  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:53.908664  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:53.912236  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:54.409247  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:54.409272  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:54.409283  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:54.409290  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:54.412057  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:54.908305  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:54.908331  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:54.908340  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:54.908346  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:54.911667  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:55.408456  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:55.408483  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:55.408495  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:55.408501  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:55.411755  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:55.412338  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:55.908684  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:55.908707  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:55.908717  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:55.908722  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:55.912000  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:56.409340  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:56.409367  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:56.409377  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:56.409381  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:56.412662  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:56.908456  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:56.908487  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:56.908496  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:56.908500  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:56.912441  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:57.408340  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:57.408367  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:57.408376  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:57.408380  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:57.411606  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:57.909190  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:57.909215  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:57.909222  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:57.909226  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:57.912661  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:57.913318  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:58.408607  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:58.408634  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:58.408645  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:58.408650  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:58.412662  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:58.909100  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:58.909121  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:58.909130  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:58.909134  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:58.912004  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:59.409198  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:59.409236  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:59.409247  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:59.409260  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:59.412639  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:59.908996  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:59.909015  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:59.909023  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:59.909027  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:59.912302  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.408791  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:00.408817  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.408827  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.408831  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.412656  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.413347  735111 node_ready.go:49] node "ha-190751-m03" has status "Ready":"True"
	I0916 13:40:00.413365  735111 node_ready.go:38] duration metric: took 14.005200684s for node "ha-190751-m03" to be "Ready" ...
	I0916 13:40:00.413374  735111 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 13:40:00.413449  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:00.413458  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.413466  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.413471  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.418583  735111 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 13:40:00.427420  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.427521  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw8n
	I0916 13:40:00.427529  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.427537  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.427540  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.432360  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:40:00.433633  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:00.433650  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.433658  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.433664  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.436286  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.436801  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.436824  735111 pod_ready.go:82] duration metric: took 9.372689ms for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.436837  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.436923  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gzkpj
	I0916 13:40:00.436936  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.436953  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.436962  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.439778  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.440560  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:00.440580  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.440591  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.440599  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.443192  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.443706  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.443721  735111 pod_ready.go:82] duration metric: took 6.871006ms for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.443730  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.443780  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751
	I0916 13:40:00.443786  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.443794  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.443800  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.447753  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.448371  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:00.448386  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.448394  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.448399  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.451314  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.451831  735111 pod_ready.go:93] pod "etcd-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.451850  735111 pod_ready.go:82] duration metric: took 8.114775ms for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.451860  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.451926  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m02
	I0916 13:40:00.451933  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.451941  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.451948  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.454389  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.454905  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:00.454919  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.454928  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.454934  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.457592  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.458235  735111 pod_ready.go:93] pod "etcd-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.458256  735111 pod_ready.go:82] duration metric: took 6.386626ms for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.458267  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.609688  735111 request.go:632] Waited for 151.317138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m03
	I0916 13:40:00.609805  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m03
	I0916 13:40:00.609819  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.609831  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.609840  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.612852  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.809297  735111 request.go:632] Waited for 195.380467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:00.809375  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:00.809387  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.809398  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.809406  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.812844  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.813633  735111 pod_ready.go:93] pod "etcd-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.813655  735111 pod_ready.go:82] duration metric: took 355.380709ms for pod "etcd-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.813698  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.009712  735111 request.go:632] Waited for 195.903853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:40:01.009809  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:40:01.009823  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.009834  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.009844  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.013414  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.209519  735111 request.go:632] Waited for 195.355467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:01.209596  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:01.209603  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.209613  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.209631  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.212826  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.213480  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:01.213498  735111 pod_ready.go:82] duration metric: took 399.791444ms for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.213508  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.409070  735111 request.go:632] Waited for 195.469232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:40:01.409150  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:40:01.409155  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.409162  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.409167  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.412916  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.609652  735111 request.go:632] Waited for 196.037799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:01.609739  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:01.609746  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.609762  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.609769  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.613056  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.613647  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:01.613735  735111 pod_ready.go:82] duration metric: took 400.154129ms for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.613761  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.809249  735111 request.go:632] Waited for 195.381651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m03
	I0916 13:40:01.809338  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m03
	I0916 13:40:01.809350  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.809361  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.809369  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.813210  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.009415  735111 request.go:632] Waited for 195.344265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:02.009525  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:02.009535  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.009550  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.009562  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.013296  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.013804  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:02.013826  735111 pod_ready.go:82] duration metric: took 400.056603ms for pod "kube-apiserver-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.013836  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.208873  735111 request.go:632] Waited for 194.922455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:40:02.208954  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:40:02.208961  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.208972  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.208984  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.212385  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.409470  735111 request.go:632] Waited for 196.297466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:02.409545  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:02.409588  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.409602  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.409612  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.412884  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.413456  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:02.413477  735111 pod_ready.go:82] duration metric: took 399.634196ms for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.413491  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.609618  735111 request.go:632] Waited for 196.019413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:40:02.609782  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:40:02.609798  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.609809  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.609817  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.613405  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.809452  735111 request.go:632] Waited for 194.909335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:02.809554  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:02.809563  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.809573  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.809583  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.813724  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:40:02.814447  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:02.814471  735111 pod_ready.go:82] duration metric: took 400.970352ms for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.814482  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.009529  735111 request.go:632] Waited for 194.967581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m03
	I0916 13:40:03.009609  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m03
	I0916 13:40:03.009621  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.009638  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.009644  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.013202  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.209298  735111 request.go:632] Waited for 195.381571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:03.209392  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:03.209400  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.209411  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.209420  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.212635  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.213153  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:03.213176  735111 pod_ready.go:82] duration metric: took 398.684012ms for pod "kube-controller-manager-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.213190  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.409330  735111 request.go:632] Waited for 196.051127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:40:03.409437  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:40:03.409449  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.409459  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.409467  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.412516  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.609651  735111 request.go:632] Waited for 196.394591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:03.609742  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:03.609749  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.609761  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.609772  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.613665  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.614254  735111 pod_ready.go:93] pod "kube-proxy-24q9n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:03.614281  735111 pod_ready.go:82] duration metric: took 401.084241ms for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.614292  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.809285  735111 request.go:632] Waited for 194.919635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:40:03.809367  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:40:03.809383  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.809394  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.809405  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.812801  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.008838  735111 request.go:632] Waited for 195.285686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.008898  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.008903  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.008911  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.008931  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.012287  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.013023  735111 pod_ready.go:93] pod "kube-proxy-9d7kt" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:04.013043  735111 pod_ready.go:82] duration metric: took 398.743498ms for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.013052  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9lpwl" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.209207  735111 request.go:632] Waited for 196.061561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9lpwl
	I0916 13:40:04.209312  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9lpwl
	I0916 13:40:04.209322  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.209331  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.209340  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.213188  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.409393  735111 request.go:632] Waited for 195.377416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:04.409499  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:04.409516  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.409525  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.409532  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.412966  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.413420  735111 pod_ready.go:93] pod "kube-proxy-9lpwl" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:04.413439  735111 pod_ready.go:82] duration metric: took 400.376846ms for pod "kube-proxy-9lpwl" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.413448  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.609529  735111 request.go:632] Waited for 195.97896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:40:04.609609  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:40:04.609618  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.609631  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.609643  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.613259  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.809335  735111 request.go:632] Waited for 195.383746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.809422  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.809430  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.809439  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.809458  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.812751  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.813354  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:04.813380  735111 pod_ready.go:82] duration metric: took 399.924701ms for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.813393  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.009758  735111 request.go:632] Waited for 196.25195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:40:05.009832  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:40:05.009839  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.009848  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.009852  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.012798  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:05.209800  735111 request.go:632] Waited for 196.394637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:05.209899  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:05.209911  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.209922  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.209933  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.213079  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:05.213806  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:05.213830  735111 pod_ready.go:82] duration metric: took 400.426093ms for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.213842  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.409784  735111 request.go:632] Waited for 195.838547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m03
	I0916 13:40:05.409860  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m03
	I0916 13:40:05.409871  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.409883  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.409894  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.413051  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:05.609259  735111 request.go:632] Waited for 195.400698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:05.609333  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:05.609359  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.609375  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.609382  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.612448  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:05.613005  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:05.613026  735111 pod_ready.go:82] duration metric: took 399.175294ms for pod "kube-scheduler-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.613039  735111 pod_ready.go:39] duration metric: took 5.199652226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 13:40:05.613057  735111 api_server.go:52] waiting for apiserver process to appear ...
	I0916 13:40:05.613111  735111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:40:05.638750  735111 api_server.go:72] duration metric: took 19.543336492s to wait for apiserver process to appear ...
	I0916 13:40:05.638783  735111 api_server.go:88] waiting for apiserver healthz status ...
	I0916 13:40:05.638810  735111 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0916 13:40:05.644921  735111 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0916 13:40:05.645004  735111 round_trippers.go:463] GET https://192.168.39.94:8443/version
	I0916 13:40:05.645014  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.645025  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.645033  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.645737  735111 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 13:40:05.645818  735111 api_server.go:141] control plane version: v1.31.1
	I0916 13:40:05.645833  735111 api_server.go:131] duration metric: took 7.043412ms to wait for apiserver health ...
	I0916 13:40:05.645841  735111 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 13:40:05.809279  735111 request.go:632] Waited for 163.352733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:05.809374  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:05.809382  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.809392  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.809398  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.815851  735111 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 13:40:05.823541  735111 system_pods.go:59] 24 kube-system pods found
	I0916 13:40:05.823571  735111 system_pods.go:61] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:40:05.823577  735111 system_pods.go:61] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:40:05.823581  735111 system_pods.go:61] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:40:05.823585  735111 system_pods.go:61] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:40:05.823588  735111 system_pods.go:61] "etcd-ha-190751-m03" [8b48a663-3100-4e8e-823e-6768605b14ee] Running
	I0916 13:40:05.823591  735111 system_pods.go:61] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:40:05.823594  735111 system_pods.go:61] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:40:05.823597  735111 system_pods.go:61] "kindnet-s7765" [0d614281-1ace-45f4-9f14-a5080a46ce0a] Running
	I0916 13:40:05.823600  735111 system_pods.go:61] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:40:05.823603  735111 system_pods.go:61] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:40:05.823608  735111 system_pods.go:61] "kube-apiserver-ha-190751-m03" [6a098e94-9f6a-4b74-bc97-b9549edd3285] Running
	I0916 13:40:05.823611  735111 system_pods.go:61] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:40:05.823614  735111 system_pods.go:61] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:40:05.823618  735111 system_pods.go:61] "kube-controller-manager-ha-190751-m03" [773d2c17-c182-40a1-b335-b03d6b030d7a] Running
	I0916 13:40:05.823621  735111 system_pods.go:61] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:40:05.823624  735111 system_pods.go:61] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:40:05.823627  735111 system_pods.go:61] "kube-proxy-9lpwl" [e12b5081-66dd-4aa1-9fc8-ff9aa93e1618] Running
	I0916 13:40:05.823630  735111 system_pods.go:61] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:40:05.823634  735111 system_pods.go:61] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:40:05.823637  735111 system_pods.go:61] "kube-scheduler-ha-190751-m03" [eafd129c-21e3-4841-84d0-81f629684de9] Running
	I0916 13:40:05.823639  735111 system_pods.go:61] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:40:05.823642  735111 system_pods.go:61] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:40:05.823646  735111 system_pods.go:61] "kube-vip-ha-190751-m03" [66c7d0df-b50f-41ad-b9f9-c9a48748390b] Running
	I0916 13:40:05.823651  735111 system_pods.go:61] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:40:05.823657  735111 system_pods.go:74] duration metric: took 177.8116ms to wait for pod list to return data ...
	I0916 13:40:05.823665  735111 default_sa.go:34] waiting for default service account to be created ...
	I0916 13:40:06.009131  735111 request.go:632] Waited for 185.378336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:40:06.009213  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:40:06.009223  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:06.009234  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:06.009243  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:06.012758  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:06.012887  735111 default_sa.go:45] found service account: "default"
	I0916 13:40:06.012901  735111 default_sa.go:55] duration metric: took 189.229884ms for default service account to be created ...
	I0916 13:40:06.012909  735111 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 13:40:06.209214  735111 request.go:632] Waited for 196.217871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:06.209293  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:06.209310  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:06.209331  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:06.209356  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:06.216560  735111 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 13:40:06.223463  735111 system_pods.go:86] 24 kube-system pods found
	I0916 13:40:06.223491  735111 system_pods.go:89] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:40:06.223497  735111 system_pods.go:89] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:40:06.223501  735111 system_pods.go:89] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:40:06.223505  735111 system_pods.go:89] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:40:06.223509  735111 system_pods.go:89] "etcd-ha-190751-m03" [8b48a663-3100-4e8e-823e-6768605b14ee] Running
	I0916 13:40:06.223512  735111 system_pods.go:89] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:40:06.223516  735111 system_pods.go:89] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:40:06.223520  735111 system_pods.go:89] "kindnet-s7765" [0d614281-1ace-45f4-9f14-a5080a46ce0a] Running
	I0916 13:40:06.223523  735111 system_pods.go:89] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:40:06.223526  735111 system_pods.go:89] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:40:06.223529  735111 system_pods.go:89] "kube-apiserver-ha-190751-m03" [6a098e94-9f6a-4b74-bc97-b9549edd3285] Running
	I0916 13:40:06.223532  735111 system_pods.go:89] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:40:06.223536  735111 system_pods.go:89] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:40:06.223539  735111 system_pods.go:89] "kube-controller-manager-ha-190751-m03" [773d2c17-c182-40a1-b335-b03d6b030d7a] Running
	I0916 13:40:06.223542  735111 system_pods.go:89] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:40:06.223545  735111 system_pods.go:89] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:40:06.223548  735111 system_pods.go:89] "kube-proxy-9lpwl" [e12b5081-66dd-4aa1-9fc8-ff9aa93e1618] Running
	I0916 13:40:06.223551  735111 system_pods.go:89] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:40:06.223554  735111 system_pods.go:89] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:40:06.223557  735111 system_pods.go:89] "kube-scheduler-ha-190751-m03" [eafd129c-21e3-4841-84d0-81f629684de9] Running
	I0916 13:40:06.223560  735111 system_pods.go:89] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:40:06.223564  735111 system_pods.go:89] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:40:06.223567  735111 system_pods.go:89] "kube-vip-ha-190751-m03" [66c7d0df-b50f-41ad-b9f9-c9a48748390b] Running
	I0916 13:40:06.223569  735111 system_pods.go:89] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:40:06.223579  735111 system_pods.go:126] duration metric: took 210.665549ms to wait for k8s-apps to be running ...
	I0916 13:40:06.223589  735111 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 13:40:06.223634  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:40:06.239619  735111 system_svc.go:56] duration metric: took 16.018236ms WaitForService to wait for kubelet
	I0916 13:40:06.239654  735111 kubeadm.go:582] duration metric: took 20.144246804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:40:06.239677  735111 node_conditions.go:102] verifying NodePressure condition ...
	I0916 13:40:06.409601  735111 request.go:632] Waited for 169.742083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes
	I0916 13:40:06.409694  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes
	I0916 13:40:06.409706  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:06.409775  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:06.409792  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:06.413568  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:06.414639  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:40:06.414663  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:40:06.414684  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:40:06.414691  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:40:06.414698  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:40:06.414703  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:40:06.414711  735111 node_conditions.go:105] duration metric: took 175.028902ms to run NodePressure ...
	I0916 13:40:06.414729  735111 start.go:241] waiting for startup goroutines ...
	I0916 13:40:06.414759  735111 start.go:255] writing updated cluster config ...
	I0916 13:40:06.415139  735111 ssh_runner.go:195] Run: rm -f paused
	I0916 13:40:06.465132  735111 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0916 13:40:06.467878  735111 out.go:177] * Done! kubectl is now configured to use "ha-190751" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.866459400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494224866437428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb73f92a-6207-4498-9f9e-4c0285a75656 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.866989009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ab640fd-74e8-47e8-b301-9612f45ddf5a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.867063469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ab640fd-74e8-47e8-b301-9612f45ddf5a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.867310497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ab640fd-74e8-47e8-b301-9612f45ddf5a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.904465481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fd77ca7-e503-44cd-a9dd-ad4725b06118 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.904547348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fd77ca7-e503-44cd-a9dd-ad4725b06118 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.906020778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93a61114-4d2b-436f-98c0-fb16b6325214 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.906436050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494224906415184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93a61114-4d2b-436f-98c0-fb16b6325214 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.907158482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5306fe21-dd31-4814-b1f2-0199463388a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.907227671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5306fe21-dd31-4814-b1f2-0199463388a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.907464849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5306fe21-dd31-4814-b1f2-0199463388a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.948266164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc44502d-a538-4695-95dc-b20729ec0ead name=/runtime.v1.RuntimeService/Version
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.948338288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc44502d-a538-4695-95dc-b20729ec0ead name=/runtime.v1.RuntimeService/Version
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.949482467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b831e6ab-93e3-40fe-bfbf-63cd86c0b7e6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.950122860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494224950101403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b831e6ab-93e3-40fe-bfbf-63cd86c0b7e6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.950751100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47988353-cbe2-44b8-8f45-e3cd73e20f0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.950809223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47988353-cbe2-44b8-8f45-e3cd73e20f0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.951108321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47988353-cbe2-44b8-8f45-e3cd73e20f0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.988433590Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a99a185b-bb65-434a-8fd2-a7bf50d00912 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.988515797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a99a185b-bb65-434a-8fd2-a7bf50d00912 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.990606688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24042189-af14-4b0e-93e4-d458bd6570c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.991119959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494224991096582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24042189-af14-4b0e-93e4-d458bd6570c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.991581012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efd06583-5bc3-48bf-92e3-3510f8903694 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.991651530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efd06583-5bc3-48bf-92e3-3510f8903694 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:43:44 ha-190751 crio[667]: time="2024-09-16 13:43:44.991952523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efd06583-5bc3-48bf-92e3-3510f8903694 name=/runtime.v1.RuntimeService/ListContainers
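The ListContainers / Version / ImageFsInfo traffic above is routine polling of CRI-O over its unix socket (the same socket recorded in the node annotation further down, unix:///var/run/crio/crio.sock). As a rough illustration of where these debug lines come from, here is a minimal Go sketch of the same ListContainers call; it assumes the k8s.io/cri-api and google.golang.org/grpc modules and is illustrative only, not part of the minikube test harness:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's socket, as reported in the node's cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter is what produces "No filters were applied,
	// returning full container list" in the CRI-O debug log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}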
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1ff16b4cf488d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   70804a075dc34       busybox-7dff88458-lsqcp
	e33b03d2f6fce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   d74b47a92fc73       coredns-7c65d6cfc9-gzkpj
	5597ff6fa9128       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   faf5324ae84ec       coredns-7c65d6cfc9-9lw8n
	85e2956fe3523       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   a8d65f7a2c445       storage-provisioner
	d2fb4efd07b92       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   e227eb76eed28       kube-proxy-9d7kt
	876c9f45c3848       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   06c5005bbb715       kindnet-gpb96
	ce48d6fe2a109       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   6ef66800e15f6       kube-vip-ha-190751
	0cd93f6d25b96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   235857e1be3ea       etcd-ha-190751
	13c8d0e1fdcbe       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a61ae034ef53d       kube-controller-manager-ha-190751
	2cb375fdf3e21       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   42b1cda382f84       kube-apiserver-ha-190751
	3d2fdc916e364       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   2b68d5be2f2cf       kube-scheduler-ha-190751
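The CREATED column in the table above is derived from the nanosecond CreatedAt values in the raw ListContainers responses. A short Go sketch of that conversion, using two timestamps copied from the log above (illustrative only):

package main

import (
	"fmt"
	"time"
)

func main() {
	// CreatedAt for coredns-7c65d6cfc9-gzkpj and the ImageFsInfo capture time,
	// both taken from the CRI-O log above (nanoseconds since the Unix epoch).
	created := time.Unix(0, 1726493905853240042)
	captured := time.Unix(0, 1726494224906415184)

	fmt.Println(created.UTC().Format(time.RFC3339))          // 2024-09-16T13:38:25Z
	fmt.Println(captured.Sub(created).Truncate(time.Second)) // 5m19s, shown as "5 minutes ago"
}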
	
	
	==> coredns [5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7] <==
	[INFO] 10.244.0.4:44564 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000175716s
	[INFO] 10.244.2.2:52543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136907s
	[INFO] 10.244.2.2:35351 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001547835s
	[INFO] 10.244.2.2:39675 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165265s
	[INFO] 10.244.2.2:37048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001066948s
	[INFO] 10.244.2.2:56795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069535s
	[INFO] 10.244.1.2:57890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135841s
	[INFO] 10.244.1.2:47650 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001636029s
	[INFO] 10.244.1.2:50206 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099676s
	[INFO] 10.244.1.2:55092 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109421s
	[INFO] 10.244.0.4:53870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097861s
	[INFO] 10.244.0.4:42443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049844s
	[INFO] 10.244.0.4:52687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057203s
	[INFO] 10.244.2.2:34837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122205s
	[INFO] 10.244.2.2:39661 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123335s
	[INFO] 10.244.2.2:52074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080782s
	[INFO] 10.244.1.2:41492 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098139s
	[INFO] 10.244.1.2:49674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088502s
	[INFO] 10.244.0.4:53518 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259854s
	[INFO] 10.244.0.4:41118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155352s
	[INFO] 10.244.0.4:33823 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119363s
	[INFO] 10.244.2.2:44582 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180459s
	[INFO] 10.244.2.2:52118 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196503s
	[INFO] 10.244.1.2:43708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011298s
	[INFO] 10.244.1.2:42623 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011952s
	
	
	==> coredns [e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781] <==
	[INFO] 10.244.2.2:59563 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000156884s
	[INFO] 10.244.1.2:58517 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00014134s
	[INFO] 10.244.1.2:36244 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000672675s
	[INFO] 10.244.1.2:37179 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001780819s
	[INFO] 10.244.0.4:50469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268768s
	[INFO] 10.244.0.4:48039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163904s
	[INFO] 10.244.0.4:34482 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084666s
	[INFO] 10.244.0.4:39892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003221704s
	[INFO] 10.244.0.4:58788 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139358s
	[INFO] 10.244.2.2:57520 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099764s
	[INFO] 10.244.2.2:33023 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142913s
	[INFO] 10.244.2.2:46886 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071348s
	[INFO] 10.244.1.2:48181 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120675s
	[INFO] 10.244.1.2:46254 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007984s
	[INFO] 10.244.1.2:51236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001105782s
	[INFO] 10.244.1.2:43880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069986s
	[INFO] 10.244.0.4:51480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109815s
	[INFO] 10.244.2.2:33439 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156091s
	[INFO] 10.244.1.2:40338 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202214s
	[INFO] 10.244.1.2:41511 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135597s
	[INFO] 10.244.0.4:57318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142285s
	[INFO] 10.244.2.2:51122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159294s
	[INFO] 10.244.2.2:45477 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016112s
	[INFO] 10.244.1.2:53140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015857s
	[INFO] 10.244.1.2:56526 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182857s
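The CoreDNS lines above follow the standard query-log format: client address and port, the question (type, class, name), transport and message size, response code, flags, response size, and latency. The PTR entries for 10.0.96.10.in-addr.arpa correspond to the cluster DNS service at 10.96.0.10. A minimal Go sketch of the kind of in-pod lookup that produces entries like these, assuming it runs inside the cluster so /etc/resolv.conf points at that service (illustrative only):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Both names appear with NOERROR answers in the CoreDNS log above.
	for _, name := range []string{
		"kubernetes.default.svc.cluster.local",
		"host.minikube.internal",
	} {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Println(name, "lookup failed:", err)
			continue
		}
		fmt.Println(name, "->", addrs)
	}
}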
	
	
	==> describe nodes <==
	Name:               ha-190751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T13_37_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:43:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:38:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    ha-190751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 413212b342c542b3a63285d76f88cc9f
	  System UUID:                413212b3-42c5-42b3-a632-85d76f88cc9f
	  Boot ID:                    757a1925-23d7-4d65-93ec-732a8b69642f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lsqcp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-9lw8n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m4s
	  kube-system                 coredns-7c65d6cfc9-gzkpj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m4s
	  kube-system                 etcd-ha-190751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m7s
	  kube-system                 kindnet-gpb96                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-apiserver-ha-190751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-190751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-9d7kt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-190751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-190751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m1s   kube-proxy       
	  Normal  Starting                 6m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m6s   kubelet          Node ha-190751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s   kubelet          Node ha-190751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s   kubelet          Node ha-190751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal  NodeReady                5m22s  kubelet          Node ha-190751 status is now: NodeReady
	  Normal  RegisteredNode           5m7s   node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal  RegisteredNode           3m54s  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	
	
	Name:               ha-190751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_38_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:38:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:41:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.192
	  Hostname:    ha-190751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 550acf86555f4901ac21dc9dc8bbc28f
	  System UUID:                550acf86-555f-4901-ac21-dc9dc8bbc28f
	  Boot ID:                    fb4d2fc9-b82a-43f9-90cb-6b91307d8d37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wnt5k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-190751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-qfl9j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-190751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-190751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-24q9n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-190751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-190751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-190751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-190751-m02 status is now: NodeNotReady
	
	
	Name:               ha-190751-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_39_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:39:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:43:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:39:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:39:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:39:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:40:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-190751-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a371c754a93a41bd8e51ba43403aed52
	  System UUID:                a371c754-a93a-41bd-8e51-ba43403aed52
	  Boot ID:                    1fe05264-4a42-4111-91d4-db1d24d6b79c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6sc6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-190751-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-s7765                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-190751-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-190751-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-9lpwl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-190751-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-190751-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  Starting                 4m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node ha-190751-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal  RegisteredNode           3m54s                node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	
	
	Name:               ha-190751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_40_46_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:43:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:40:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:40:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:40:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:41:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-190751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 99332c0e26304b3097b2fce26060f009
	  System UUID:                99332c0e-2630-4b30-97b2-fce26060f009
	  Boot ID:                    64cf2850-6571-40c7-816a-9ba47cc07e90
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9nmfv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-tk6f6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 2m54s               kube-proxy       
	  Normal  NodeAllocatableEnforced  3m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 3m)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 3m)  kubelet          Node ha-190751-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 3m)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m58s               node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal  RegisteredNode           2m57s               node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal  RegisteredNode           2m55s               node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal  NodeReady                2m41s               kubelet          Node ha-190751-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 13:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050855] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039668] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.744321] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.392336] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.569791] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.291459] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.062528] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065864] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.157574] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.135658] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.243263] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.876209] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.159219] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.061484] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.191933] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.087738] kauditd_printk_skb: 79 callbacks suppressed
	[Sep16 13:38] kauditd_printk_skb: 69 callbacks suppressed
	[ +12.548550] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90] <==
	{"level":"warn","ts":"2024-09-16T13:43:45.237560Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.238948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.269475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.277526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.281473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.290648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.301022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.303683Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.310160Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.313468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.316555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.322054Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.328370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.334426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.337277Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.340600Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.346152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.353301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.360516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.363420Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.366333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.370009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.376064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.381711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:43:45.399253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:43:45 up 6 min,  0 users,  load average: 0.46, 0.36, 0.20
	Linux ha-190751 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629] <==
	I0916 13:43:13.330785       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:43:23.328560       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:43:23.328713       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:43:23.328990       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:43:23.329058       1 main.go:299] handling current node
	I0916 13:43:23.329134       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:43:23.329175       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:43:23.329307       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:43:23.329369       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:43:33.330402       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:43:33.330485       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:43:33.330629       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:43:33.330636       1 main.go:299] handling current node
	I0916 13:43:33.330650       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:43:33.330671       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:43:33.330732       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:43:33.330753       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:43:43.328988       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:43:43.329191       1 main.go:299] handling current node
	I0916 13:43:43.329232       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:43:43.329259       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:43:43.329509       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:43:43.329547       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:43:43.329644       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:43:43.329673       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b] <==
	W0916 13:37:35.613917       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94]
	I0916 13:37:35.615132       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 13:37:35.624415       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 13:37:35.827166       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 13:37:39.689217       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 13:37:39.701776       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 13:37:39.710054       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 13:37:41.127261       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 13:37:41.327290       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 13:40:11.257358       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39950: use of closed network connection
	E0916 13:40:11.454111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39978: use of closed network connection
	E0916 13:40:11.636499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39984: use of closed network connection
	E0916 13:40:11.840283       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39996: use of closed network connection
	E0916 13:40:12.021189       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40006: use of closed network connection
	E0916 13:40:12.205489       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40024: use of closed network connection
	E0916 13:40:12.384741       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40054: use of closed network connection
	E0916 13:40:12.574388       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40072: use of closed network connection
	E0916 13:40:12.749001       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37174: use of closed network connection
	E0916 13:40:13.066042       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37194: use of closed network connection
	E0916 13:40:13.245757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37222: use of closed network connection
	E0916 13:40:13.436949       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37248: use of closed network connection
	E0916 13:40:13.616454       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37268: use of closed network connection
	E0916 13:40:13.824166       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37286: use of closed network connection
	E0916 13:40:14.008342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37304: use of closed network connection
	W0916 13:41:35.625317       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.134 192.168.39.94]
	
	
	==> kube-controller-manager [13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad] <==
	I0916 13:40:46.061918       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-190751-m04" podCIDRs=["10.244.3.0/24"]
	I0916 13:40:46.063041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.063224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.072623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.321470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.702227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:47.082576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:48.649251       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:48.708573       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:50.920633       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-190751-m04"
	I0916 13:40:50.921471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:50.942281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:56.091517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:04.673759       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-190751-m04"
	I0916 13:41:04.674036       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:04.689464       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:05.935484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:16.221567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:42:05.964385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	I0916 13:42:05.964586       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-190751-m04"
	I0916 13:42:05.985678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	I0916 13:42:06.119405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.494924ms"
	I0916 13:42:06.120105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.05µs"
	I0916 13:42:07.101541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	I0916 13:42:11.166665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	
	
	==> kube-proxy [d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 13:37:43.541653       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 13:37:43.561090       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	E0916 13:37:43.561216       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 13:37:43.596546       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 13:37:43.596577       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 13:37:43.596598       1 server_linux.go:169] "Using iptables Proxier"
	I0916 13:37:43.600422       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 13:37:43.600713       1 server.go:483] "Version info" version="v1.31.1"
	I0916 13:37:43.600739       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:37:43.602773       1 config.go:199] "Starting service config controller"
	I0916 13:37:43.603076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 13:37:43.603330       1 config.go:105] "Starting endpoint slice config controller"
	I0916 13:37:43.603354       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 13:37:43.604127       1 config.go:328] "Starting node config controller"
	I0916 13:37:43.604167       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 13:37:43.703958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 13:37:43.704048       1 shared_informer.go:320] Caches are synced for service config
	I0916 13:37:43.707176       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c] <==
	W0916 13:37:35.187363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 13:37:35.187418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.189451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 13:37:35.189536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.192628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 13:37:35.192665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.197996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 13:37:35.198037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.202047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 13:37:35.202088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.205639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 13:37:35.205680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.218014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 13:37:35.218057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.232785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 13:37:35.232941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 13:37:36.647896       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 13:40:46.111447       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.111635       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1bfac972-00f2-440b-8577-132ebf2ef8fa(kube-system/kube-proxy-v4ngc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v4ngc"
	E0916 13:40:46.111674       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" pod="kube-system/kube-proxy-v4ngc"
	I0916 13:40:46.111701       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.136509       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	E0916 13:40:46.136581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a53af4e2-ffdc-4e32-8f97-f0b2684145be(kube-system/kindnet-9nmfv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9nmfv"
	E0916 13:40:46.136599       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" pod="kube-system/kindnet-9nmfv"
	I0916 13:40:46.136617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	
	
	==> kubelet <==
	Sep 16 13:42:29 ha-190751 kubelet[1315]: E0916 13:42:29.733365    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494149732071970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:42:39 ha-190751 kubelet[1315]: E0916 13:42:39.652815    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 13:42:39 ha-190751 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 13:42:39 ha-190751 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 13:42:39 ha-190751 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 13:42:39 ha-190751 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 13:42:39 ha-190751 kubelet[1315]: E0916 13:42:39.734930    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494159734249380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:42:39 ha-190751 kubelet[1315]: E0916 13:42:39.734983    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494159734249380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:42:49 ha-190751 kubelet[1315]: E0916 13:42:49.736122    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494169735728638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:42:49 ha-190751 kubelet[1315]: E0916 13:42:49.736152    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494169735728638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:42:59 ha-190751 kubelet[1315]: E0916 13:42:59.737464    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494179737167298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:42:59 ha-190751 kubelet[1315]: E0916 13:42:59.737550    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494179737167298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:09 ha-190751 kubelet[1315]: E0916 13:43:09.738645    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494189738384757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:09 ha-190751 kubelet[1315]: E0916 13:43:09.739025    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494189738384757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:19 ha-190751 kubelet[1315]: E0916 13:43:19.740916    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494199740215330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:19 ha-190751 kubelet[1315]: E0916 13:43:19.741381    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494199740215330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:29 ha-190751 kubelet[1315]: E0916 13:43:29.746273    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494209745740647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:29 ha-190751 kubelet[1315]: E0916 13:43:29.746714    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494209745740647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:39 ha-190751 kubelet[1315]: E0916 13:43:39.648884    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 13:43:39 ha-190751 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 13:43:39 ha-190751 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 13:43:39 ha-190751 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 13:43:39 ha-190751 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 13:43:39 ha-190751 kubelet[1315]: E0916 13:43:39.749540    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494219749060015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:39 ha-190751 kubelet[1315]: E0916 13:43:39.749585    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494219749060015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-190751 -n ha-190751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-190751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (47.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (3.192247006s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:43:49.942594  739865 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:43:49.942727  739865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:43:49.942737  739865 out.go:358] Setting ErrFile to fd 2...
	I0916 13:43:49.942742  739865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:43:49.942916  739865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:43:49.943074  739865 out.go:352] Setting JSON to false
	I0916 13:43:49.943108  739865 mustload.go:65] Loading cluster: ha-190751
	I0916 13:43:49.943231  739865 notify.go:220] Checking for updates...
	I0916 13:43:49.943664  739865 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:43:49.943685  739865 status.go:255] checking status of ha-190751 ...
	I0916 13:43:49.944170  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:49.944234  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:49.959760  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0916 13:43:49.960203  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:49.960791  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:49.960812  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:49.961153  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:49.961365  739865 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:43:49.962993  739865 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:43:49.963011  739865 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:43:49.963303  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:49.963346  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:49.978056  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I0916 13:43:49.978419  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:49.978813  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:49.978829  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:49.979147  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:49.979332  739865 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:43:49.981943  739865 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:49.982399  739865 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:43:49.982426  739865 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:49.982587  739865 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:43:49.982872  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:49.982906  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:49.996857  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0916 13:43:49.997269  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:49.997818  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:49.997844  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:49.998408  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:49.998585  739865 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:43:49.998775  739865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:49.998814  739865 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:43:50.001350  739865 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:50.001755  739865 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:43:50.001788  739865 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:50.001902  739865 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:43:50.002084  739865 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:43:50.002206  739865 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:43:50.002307  739865 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:43:50.086411  739865 ssh_runner.go:195] Run: systemctl --version
	I0916 13:43:50.092371  739865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:50.107240  739865 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:43:50.107274  739865 api_server.go:166] Checking apiserver status ...
	I0916 13:43:50.107306  739865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:43:50.121964  739865 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:43:50.130990  739865 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:43:50.131042  739865 ssh_runner.go:195] Run: ls
	I0916 13:43:50.135358  739865 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:43:50.139499  739865 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:43:50.139518  739865 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:43:50.139529  739865 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:43:50.139544  739865 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:43:50.139847  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:50.139878  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:50.156192  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36173
	I0916 13:43:50.156600  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:50.157144  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:50.157165  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:50.157486  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:50.157686  739865 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:43:50.159197  739865 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:43:50.159216  739865 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:43:50.159546  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:50.159612  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:50.173854  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0916 13:43:50.174306  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:50.174697  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:50.174730  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:50.175063  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:50.175257  739865 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:43:50.177769  739865 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:50.178154  739865 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:43:50.178191  739865 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:50.178283  739865 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:43:50.178580  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:50.178623  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:50.192855  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
	I0916 13:43:50.193259  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:50.193736  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:50.193755  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:50.194038  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:50.194204  739865 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:43:50.194348  739865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:50.194367  739865 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:43:50.196986  739865 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:50.197436  739865 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:43:50.197461  739865 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:50.197643  739865 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:43:50.197796  739865 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:43:50.197932  739865 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:43:50.198061  739865 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	W0916 13:43:52.741972  739865 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:43:52.742101  739865 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	E0916 13:43:52.742123  739865 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:43:52.742136  739865 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:43:52.742175  739865 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:43:52.742187  739865 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:43:52.742530  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:52.742576  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:52.757814  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0916 13:43:52.758319  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:52.758808  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:52.758829  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:52.759170  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:52.759344  739865 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:43:52.760849  739865 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:43:52.760868  739865 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:43:52.761153  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:52.761190  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:52.775686  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I0916 13:43:52.776015  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:52.776398  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:52.776412  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:52.776730  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:52.776900  739865 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:43:52.779445  739865 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:52.779879  739865 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:43:52.779904  739865 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:52.780036  739865 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:43:52.780429  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:52.780469  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:52.795034  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0916 13:43:52.795472  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:52.795919  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:52.795941  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:52.796233  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:52.796419  739865 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:43:52.796607  739865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:52.796627  739865 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:43:52.799108  739865 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:52.799592  739865 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:43:52.799619  739865 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:52.799771  739865 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:43:52.799946  739865 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:43:52.800099  739865 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:43:52.800219  739865 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:43:52.881390  739865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:52.897435  739865 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:43:52.897468  739865 api_server.go:166] Checking apiserver status ...
	I0916 13:43:52.897510  739865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:43:52.912986  739865 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:43:52.923582  739865 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:43:52.923638  739865 ssh_runner.go:195] Run: ls
	I0916 13:43:52.929420  739865 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:43:52.934020  739865 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:43:52.934040  739865 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:43:52.934049  739865 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:43:52.934065  739865 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:43:52.934394  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:52.934434  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:52.949346  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I0916 13:43:52.949771  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:52.950262  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:52.950283  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:52.950563  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:52.950726  739865 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:43:52.952145  739865 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:43:52.952163  739865 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:43:52.952462  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:52.952497  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:52.967292  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0916 13:43:52.967661  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:52.968132  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:52.968154  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:52.968496  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:52.968698  739865 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:43:52.971170  739865 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:52.971543  739865 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:43:52.971576  739865 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:52.971732  739865 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:43:52.972017  739865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:52.972057  739865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:52.986656  739865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44997
	I0916 13:43:52.987067  739865 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:52.987541  739865 main.go:141] libmachine: Using API Version  1
	I0916 13:43:52.987566  739865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:52.987889  739865 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:52.988068  739865 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:43:52.988233  739865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:52.988252  739865 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:43:52.990550  739865 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:52.990938  739865 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:43:52.990976  739865 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:52.991088  739865 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:43:52.991230  739865 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:43:52.991366  739865 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:43:52.991477  739865 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:43:53.073504  739865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:53.088231  739865 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
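
Note: the "host: Error" / "kubelet: Nonexistent" result for ha-190751-m02 above comes from the SSH dial at sshutil.go:64 failing with "connect: no route to host" before the "df -h /var" probe ever runs. As a rough illustration only (this is not minikube's code), a minimal Go sketch of that reachability check, hard-coding the ha-190751-m02 address reported in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable reproduces, in spirit, the TCP dial that precedes the
// "df -h /var" probe in the status output above; a failure here is what
// surfaces as host: Error / kubelet: Nonexistent for the node.
func reachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "connect: no route to host"
	}
	return conn.Close()
}

func main() {
	// Address taken from the log above; specific to this test run.
	if err := reachable("192.168.39.192:22", 5*time.Second); err != nil {
		fmt.Println("ssh endpoint unreachable:", err)
		return
	}
	fmt.Println("ssh endpoint reachable")
}

The retries that follow repeat the same dial against the same address, so the status output stays unchanged until the m02 host becomes reachable again.
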
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (5.322482305s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:43:53.955534  739949 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:43:53.955661  739949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:43:53.955672  739949 out.go:358] Setting ErrFile to fd 2...
	I0916 13:43:53.955680  739949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:43:53.955869  739949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:43:53.956053  739949 out.go:352] Setting JSON to false
	I0916 13:43:53.956094  739949 mustload.go:65] Loading cluster: ha-190751
	I0916 13:43:53.956180  739949 notify.go:220] Checking for updates...
	I0916 13:43:53.956564  739949 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:43:53.956582  739949 status.go:255] checking status of ha-190751 ...
	I0916 13:43:53.957043  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:53.957091  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:53.975092  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40449
	I0916 13:43:53.975509  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:53.976061  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:53.976094  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:53.976439  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:53.976624  739949 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:43:53.978111  739949 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:43:53.978129  739949 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:43:53.978433  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:53.978476  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:53.993029  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36533
	I0916 13:43:53.993423  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:53.993923  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:53.993942  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:53.994274  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:53.994424  739949 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:43:53.997216  739949 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:53.997579  739949 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:43:53.997597  739949 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:53.997788  739949 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:43:53.998074  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:53.998119  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:54.013995  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34861
	I0916 13:43:54.014410  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:54.014864  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:54.014888  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:54.015186  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:54.015394  739949 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:43:54.015582  739949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:54.015609  739949 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:43:54.018501  739949 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:54.018873  739949 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:43:54.018896  739949 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:43:54.019092  739949 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:43:54.019305  739949 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:43:54.019472  739949 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:43:54.019595  739949 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:43:54.101826  739949 ssh_runner.go:195] Run: systemctl --version
	I0916 13:43:54.108360  739949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:54.124679  739949 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:43:54.124723  739949 api_server.go:166] Checking apiserver status ...
	I0916 13:43:54.124768  739949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:43:54.138803  739949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:43:54.148231  739949 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:43:54.148294  739949 ssh_runner.go:195] Run: ls
	I0916 13:43:54.152272  739949 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:43:54.158414  739949 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:43:54.158433  739949 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:43:54.158443  739949 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:43:54.158461  739949 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:43:54.158863  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:54.158903  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:54.173896  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39321
	I0916 13:43:54.174408  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:54.174918  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:54.174940  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:54.175281  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:54.175467  739949 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:43:54.177094  739949 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:43:54.177115  739949 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:43:54.177450  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:54.177500  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:54.193326  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0916 13:43:54.193727  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:54.194158  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:54.194187  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:54.194525  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:54.194695  739949 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:43:54.197362  739949 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:54.197801  739949 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:43:54.197825  739949 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:54.197962  739949 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:43:54.198354  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:54.198393  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:54.213566  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44773
	I0916 13:43:54.214050  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:54.214598  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:54.214617  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:54.214993  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:54.215140  739949 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:43:54.215310  739949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:54.215335  739949 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:43:54.218039  739949 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:54.218469  739949 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:43:54.218496  739949 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:43:54.218639  739949 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:43:54.218810  739949 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:43:54.218948  739949 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:43:54.219059  739949 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	W0916 13:43:55.814034  739949 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:43:55.814091  739949 retry.go:31] will retry after 219.008259ms: dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:43:58.885978  739949 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:43:58.886091  739949 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	E0916 13:43:58.886112  739949 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:43:58.886122  739949 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:43:58.886152  739949 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:43:58.886160  739949 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:43:58.886595  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:58.886650  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:58.901746  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0916 13:43:58.902168  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:58.902764  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:58.902780  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:58.903129  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:58.903310  739949 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:43:58.904957  739949 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:43:58.904979  739949 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:43:58.905267  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:58.905306  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:58.920325  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I0916 13:43:58.920813  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:58.921321  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:58.921343  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:58.921635  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:58.921813  739949 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:43:58.924149  739949 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:58.924590  739949 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:43:58.924606  739949 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:58.924761  739949 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:43:58.925136  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:58.925195  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:58.939644  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0916 13:43:58.940026  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:58.940455  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:58.940473  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:58.940769  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:58.940944  739949 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:43:58.941113  739949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:58.941138  739949 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:43:58.943711  739949 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:58.944129  739949 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:43:58.944155  739949 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:43:58.944267  739949 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:43:58.944421  739949 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:43:58.944568  739949 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:43:58.944706  739949 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:43:59.025998  739949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:59.042559  739949 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:43:59.042597  739949 api_server.go:166] Checking apiserver status ...
	I0916 13:43:59.042646  739949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:43:59.056482  739949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:43:59.065891  739949 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:43:59.065926  739949 ssh_runner.go:195] Run: ls
	I0916 13:43:59.070061  739949 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:43:59.074332  739949 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:43:59.074353  739949 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:43:59.074364  739949 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:43:59.074383  739949 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:43:59.074668  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:59.074708  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:59.089771  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0916 13:43:59.090241  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:59.090720  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:59.090739  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:59.091084  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:59.091269  739949 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:43:59.092725  739949 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:43:59.092743  739949 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:43:59.093020  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:59.093052  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:59.108020  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
	I0916 13:43:59.108449  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:59.108946  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:59.108965  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:59.109347  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:59.109551  739949 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:43:59.112738  739949 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:59.113170  739949 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:43:59.113196  739949 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:59.113490  739949 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:43:59.113964  739949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:43:59.114017  739949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:43:59.129873  739949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0916 13:43:59.130205  739949 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:43:59.130657  739949 main.go:141] libmachine: Using API Version  1
	I0916 13:43:59.130676  739949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:43:59.130989  739949 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:43:59.131191  739949 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:43:59.131369  739949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:43:59.131388  739949 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:43:59.134189  739949 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:59.134594  739949 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:43:59.134623  739949 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:43:59.134729  739949 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:43:59.134881  739949 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:43:59.135005  739949 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:43:59.135150  739949 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:43:59.221129  739949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:43:59.235119  739949 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
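
For the nodes that do respond, the check above finishes by probing the shared apiserver endpoint, logged at api_server.go:253 as a GET to https://192.168.39.254:8443/healthz that returns 200 and "ok". A rough Go sketch of that probe (illustrative only, not minikube's implementation; certificate verification is skipped here purely to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above (the ha-190751 control-plane VIP).
	const url = "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is a shortcut for this sketch; the real
		// status check runs with the cluster's own configuration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect 200: ok
}
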
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (4.453548228s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:44:01.166360  740065 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:44:01.166598  740065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:01.166607  740065 out.go:358] Setting ErrFile to fd 2...
	I0916 13:44:01.166611  740065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:01.166778  740065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:44:01.166933  740065 out.go:352] Setting JSON to false
	I0916 13:44:01.166967  740065 mustload.go:65] Loading cluster: ha-190751
	I0916 13:44:01.167070  740065 notify.go:220] Checking for updates...
	I0916 13:44:01.167350  740065 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:44:01.167366  740065 status.go:255] checking status of ha-190751 ...
	I0916 13:44:01.167789  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:01.167839  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:01.187382  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I0916 13:44:01.187948  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:01.188645  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:01.188669  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:01.189122  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:01.189352  740065 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:44:01.191080  740065 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:44:01.191098  740065 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:01.191425  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:01.191477  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:01.206286  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37131
	I0916 13:44:01.206771  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:01.207224  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:01.207243  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:01.207609  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:01.207809  740065 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:44:01.210646  740065 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:01.211086  740065 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:01.211112  740065 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:01.211257  740065 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:01.211545  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:01.211592  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:01.227257  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46815
	I0916 13:44:01.227773  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:01.228265  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:01.228290  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:01.228590  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:01.228765  740065 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:44:01.228925  740065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:01.228943  740065 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:44:01.231515  740065 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:01.231949  740065 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:01.231987  740065 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:01.232127  740065 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:44:01.232299  740065 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:44:01.232430  740065 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:44:01.232558  740065 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:44:01.323323  740065 ssh_runner.go:195] Run: systemctl --version
	I0916 13:44:01.331291  740065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:01.346982  740065 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:01.347031  740065 api_server.go:166] Checking apiserver status ...
	I0916 13:44:01.347083  740065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:01.364601  740065 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:44:01.374477  740065 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:01.374536  740065 ssh_runner.go:195] Run: ls
	I0916 13:44:01.379476  740065 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:01.385501  740065 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:01.385526  740065 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:44:01.385536  740065 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:01.385557  740065 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:44:01.385894  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:01.385936  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:01.401376  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0916 13:44:01.401832  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:01.402348  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:01.402374  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:01.402698  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:01.402867  740065 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:44:01.404551  740065 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:44:01.404570  740065 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:01.404870  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:01.404907  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:01.419589  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0916 13:44:01.420118  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:01.420655  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:01.420680  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:01.420962  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:01.421159  740065 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:44:01.423878  740065 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:01.424420  740065 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:01.424450  740065 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:01.424648  740065 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:01.425007  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:01.425047  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:01.439738  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I0916 13:44:01.440160  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:01.440618  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:01.440639  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:01.440926  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:01.441096  740065 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:44:01.441259  740065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:01.441282  740065 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:44:01.443842  740065 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:01.444177  740065 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:01.444198  740065 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:01.444349  740065 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:44:01.444515  740065 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:44:01.444630  740065 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:44:01.444758  740065 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	W0916 13:44:01.957882  740065 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:01.957949  740065 retry.go:31] will retry after 209.384422ms: dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:44:05.221896  740065 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:44:05.222008  740065 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	E0916 13:44:05.222040  740065 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:05.222048  740065 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:44:05.222068  740065 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:05.222075  740065 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:44:05.222412  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:05.222489  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:05.237250  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0916 13:44:05.237732  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:05.238207  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:05.238234  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:05.238536  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:05.238690  740065 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:44:05.240153  740065 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:44:05.240171  740065 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:05.240460  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:05.240501  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:05.255483  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0916 13:44:05.255954  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:05.256441  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:05.256468  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:05.256832  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:05.257014  740065 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:44:05.259777  740065 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:05.260183  740065 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:05.260233  740065 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:05.260354  740065 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:05.260651  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:05.260702  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:05.274838  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0916 13:44:05.275152  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:05.275597  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:05.275628  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:05.275921  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:05.276093  740065 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:44:05.276311  740065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:05.276333  740065 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:44:05.279164  740065 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:05.279603  740065 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:05.279627  740065 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:05.279759  740065 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:44:05.279939  740065 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:44:05.280095  740065 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:44:05.280252  740065 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:44:05.368659  740065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:05.384998  740065 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:05.385025  740065 api_server.go:166] Checking apiserver status ...
	I0916 13:44:05.385056  740065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:05.399123  740065 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:44:05.408692  740065 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:05.408745  740065 ssh_runner.go:195] Run: ls
	I0916 13:44:05.415060  740065 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:05.420723  740065 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:05.420748  740065 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:44:05.420759  740065 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:05.420779  740065 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:44:05.421171  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:05.421222  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:05.436300  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0916 13:44:05.436631  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:05.437074  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:05.437098  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:05.437503  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:05.437701  740065 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:44:05.439039  740065 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:44:05.439056  740065 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:05.439454  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:05.439497  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:05.454702  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37137
	I0916 13:44:05.455072  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:05.455534  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:05.455557  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:05.455948  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:05.456168  740065 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:44:05.458942  740065 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:05.459379  740065 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:05.459408  740065 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:05.459545  740065 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:05.459836  740065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:05.459869  740065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:05.474100  740065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0916 13:44:05.474534  740065 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:05.475068  740065 main.go:141] libmachine: Using API Version  1
	I0916 13:44:05.475104  740065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:05.475417  740065 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:05.475581  740065 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:44:05.475772  740065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:05.475794  740065 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:44:05.478223  740065 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:05.478621  740065 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:05.478646  740065 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:05.478817  740065 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:44:05.478993  740065 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:44:05.479134  740065 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:44:05.479294  740065 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:44:05.560848  740065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:05.574712  740065 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (3.931002079s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:44:08.015580  740165 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:44:08.015844  740165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:08.015855  740165 out.go:358] Setting ErrFile to fd 2...
	I0916 13:44:08.015859  740165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:08.016012  740165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:44:08.016162  740165 out.go:352] Setting JSON to false
	I0916 13:44:08.016194  740165 mustload.go:65] Loading cluster: ha-190751
	I0916 13:44:08.016242  740165 notify.go:220] Checking for updates...
	I0916 13:44:08.016773  740165 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:44:08.016796  740165 status.go:255] checking status of ha-190751 ...
	I0916 13:44:08.017328  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:08.017374  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:08.034671  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0916 13:44:08.035104  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:08.035708  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:08.035732  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:08.036139  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:08.036348  740165 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:44:08.038010  740165 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:44:08.038028  740165 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:08.038305  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:08.038340  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:08.052786  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45575
	I0916 13:44:08.053151  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:08.053585  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:08.053606  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:08.053929  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:08.054114  740165 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:44:08.056808  740165 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:08.057172  740165 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:08.057203  740165 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:08.057364  740165 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:08.057691  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:08.057740  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:08.072734  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0916 13:44:08.073191  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:08.073616  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:08.073637  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:08.073925  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:08.074094  740165 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:44:08.074290  740165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:08.074328  740165 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:44:08.076749  740165 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:08.077177  740165 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:08.077206  740165 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:08.077280  740165 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:44:08.077440  740165 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:44:08.077577  740165 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:44:08.077714  740165 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:44:08.161090  740165 ssh_runner.go:195] Run: systemctl --version
	I0916 13:44:08.167292  740165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:08.181360  740165 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:08.181394  740165 api_server.go:166] Checking apiserver status ...
	I0916 13:44:08.181422  740165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:08.202731  740165 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:44:08.214262  740165 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:08.214321  740165 ssh_runner.go:195] Run: ls
	I0916 13:44:08.218806  740165 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:08.223207  740165 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:08.223227  740165 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:44:08.223239  740165 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:08.223261  740165 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:44:08.223551  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:08.223602  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:08.238536  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
	I0916 13:44:08.238934  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:08.239447  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:08.239471  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:08.239781  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:08.239952  740165 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:44:08.241492  740165 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:44:08.241509  740165 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:08.241878  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:08.241925  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:08.256383  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0916 13:44:08.256774  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:08.257250  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:08.257272  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:08.257555  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:08.257750  740165 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:44:08.260277  740165 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:08.260693  740165 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:08.260716  740165 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:08.260862  740165 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:08.261142  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:08.261174  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:08.275042  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0916 13:44:08.275459  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:08.275902  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:08.275922  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:08.276219  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:08.276391  740165 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:44:08.276567  740165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:08.276589  740165 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:44:08.278875  740165 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:08.279245  740165 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:08.279272  740165 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:08.279405  740165 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:44:08.279567  740165 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:44:08.279695  740165 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:44:08.279789  740165 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	W0916 13:44:08.297856  740165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:08.297908  740165 retry.go:31] will retry after 201.777671ms: dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:44:11.557894  740165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:44:11.557984  740165 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	E0916 13:44:11.558001  740165 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:11.558009  740165 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:44:11.558041  740165 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:11.558048  740165 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:44:11.558357  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:11.558409  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:11.573231  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0916 13:44:11.573653  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:11.574142  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:11.574165  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:11.574486  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:11.574671  740165 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:44:11.576014  740165 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:44:11.576031  740165 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:11.576460  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:11.576521  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:11.591363  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0916 13:44:11.591791  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:11.592459  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:11.592479  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:11.592812  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:11.592992  740165 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:44:11.595792  740165 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:11.596201  740165 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:11.596223  740165 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:11.596368  740165 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:11.596656  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:11.596692  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:11.611179  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0916 13:44:11.611545  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:11.612065  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:11.612087  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:11.612386  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:11.612596  740165 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:44:11.612816  740165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:11.612847  740165 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:44:11.615295  740165 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:11.615704  740165 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:11.615732  740165 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:11.615816  740165 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:44:11.615979  740165 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:44:11.616122  740165 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:44:11.616246  740165 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:44:11.696933  740165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:11.711585  740165 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:11.711617  740165 api_server.go:166] Checking apiserver status ...
	I0916 13:44:11.711657  740165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:11.728146  740165 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:44:11.737980  740165 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:11.738039  740165 ssh_runner.go:195] Run: ls
	I0916 13:44:11.742220  740165 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:11.748218  740165 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:11.748240  740165 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:44:11.748251  740165 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:11.748271  740165 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:44:11.748573  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:11.748628  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:11.763492  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0916 13:44:11.763939  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:11.764419  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:11.764441  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:11.764780  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:11.764946  740165 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:44:11.766401  740165 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:44:11.766420  740165 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:11.766737  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:11.766775  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:11.781311  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0916 13:44:11.781746  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:11.782215  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:11.782234  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:11.782526  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:11.782706  740165 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:44:11.785518  740165 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:11.785958  740165 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:11.785995  740165 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:11.786150  740165 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:11.786459  740165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:11.786503  740165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:11.800457  740165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I0916 13:44:11.800796  740165 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:11.801224  740165 main.go:141] libmachine: Using API Version  1
	I0916 13:44:11.801240  740165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:11.801537  740165 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:11.801729  740165 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:44:11.801899  740165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:11.801917  740165 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:44:11.804095  740165 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:11.804425  740165 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:11.804451  740165 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:11.804541  740165 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:44:11.804679  740165 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:44:11.804825  740165 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:44:11.804955  740165 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:44:11.888758  740165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:11.902806  740165 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
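Each failed dial above is followed by a "will retry after ..." line before the status for m02 is finally reported as Error. The snippet below is a small, hypothetical Go illustration of that bounded retry-with-delay pattern (it is not minikube's retry.go); the error string is copied from the log only to make the output recognizable.

	// retry_sketch.go - hypothetical illustration of the retry pattern in the log.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// withRetry is a made-up helper: it re-runs op up to attempts times, sleeping
	// a short randomized interval between failures, and returns the last error.
	func withRetry(attempts int, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			delay := time.Duration(200+rand.Intn(100)) * time.Millisecond
			fmt.Printf("attempt %d failed (%v); will retry after %s\n", i+1, err, delay)
			time.Sleep(delay)
		}
		return err
	}
	
	func main() {
		err := withRetry(3, func() error {
			// Stand-in for the dial that keeps failing for ha-190751-m02.
			return errors.New("dial tcp 192.168.39.192:22: connect: no route to host")
		})
		fmt.Println("final result:", err)
	}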
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (3.707414391s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:44:14.501751  740281 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:44:14.502212  740281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:14.502285  740281 out.go:358] Setting ErrFile to fd 2...
	I0916 13:44:14.502303  740281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:14.502722  740281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:44:14.503033  740281 out.go:352] Setting JSON to false
	I0916 13:44:14.503128  740281 mustload.go:65] Loading cluster: ha-190751
	I0916 13:44:14.503237  740281 notify.go:220] Checking for updates...
	I0916 13:44:14.503898  740281 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:44:14.503922  740281 status.go:255] checking status of ha-190751 ...
	I0916 13:44:14.504378  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:14.504438  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:14.520114  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39419
	I0916 13:44:14.520711  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:14.521412  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:14.521432  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:14.521896  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:14.522200  740281 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:44:14.523905  740281 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:44:14.523921  740281 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:14.524202  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:14.524233  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:14.539999  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0916 13:44:14.540439  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:14.540967  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:14.540994  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:14.541285  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:14.541487  740281 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:44:14.544326  740281 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:14.544806  740281 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:14.544870  740281 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:14.544933  740281 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:14.545224  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:14.545257  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:14.560140  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0916 13:44:14.560568  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:14.560968  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:14.560984  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:14.561270  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:14.561459  740281 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:44:14.561660  740281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:14.561715  740281 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:44:14.564140  740281 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:14.564528  740281 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:14.564555  740281 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:14.564683  740281 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:44:14.564831  740281 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:44:14.564967  740281 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:44:14.565084  740281 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:44:14.653275  740281 ssh_runner.go:195] Run: systemctl --version
	I0916 13:44:14.661115  740281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:14.676722  740281 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:14.676766  740281 api_server.go:166] Checking apiserver status ...
	I0916 13:44:14.676804  740281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:14.690207  740281 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:44:14.699920  740281 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:14.699966  740281 ssh_runner.go:195] Run: ls
	I0916 13:44:14.704455  740281 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:14.708790  740281 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:14.708813  740281 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:44:14.708826  740281 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:14.708863  740281 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:44:14.709166  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:14.709209  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:14.724931  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0916 13:44:14.725340  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:14.725856  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:14.725873  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:14.726214  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:14.726388  740281 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:44:14.728063  740281 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:44:14.728105  740281 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:14.728447  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:14.728488  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:14.744273  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0916 13:44:14.744709  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:14.745173  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:14.745197  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:14.745518  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:14.745708  740281 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:44:14.748126  740281 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:14.748578  740281 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:14.748609  740281 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:14.748678  740281 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:14.748962  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:14.748999  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:14.763457  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0916 13:44:14.763908  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:14.764386  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:14.764409  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:14.764770  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:14.764957  740281 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:44:14.765132  740281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:14.765153  740281 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:44:14.767588  740281 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:14.767992  740281 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:14.768019  740281 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:14.768180  740281 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:44:14.768375  740281 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:44:14.768531  740281 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:44:14.768649  740281 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	W0916 13:44:17.830037  740281 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:44:17.830159  740281 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	E0916 13:44:17.830177  740281 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:17.830184  740281 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:44:17.830202  740281 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:17.830218  740281 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:44:17.830530  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:17.830585  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:17.845660  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32845
	I0916 13:44:17.846151  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:17.846678  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:17.846708  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:17.846999  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:17.847211  740281 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:44:17.848674  740281 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:44:17.848691  740281 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:17.849071  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:17.849114  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:17.863927  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37563
	I0916 13:44:17.864318  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:17.864791  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:17.864810  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:17.865170  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:17.865355  740281 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:44:17.867968  740281 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:17.868459  740281 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:17.868493  740281 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:17.868627  740281 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:17.868926  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:17.868960  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:17.882894  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39905
	I0916 13:44:17.883267  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:17.883751  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:17.883769  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:17.884069  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:17.884223  740281 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:44:17.884399  740281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:17.884439  740281 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:44:17.886879  740281 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:17.887280  740281 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:17.887310  740281 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:17.887448  740281 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:44:17.887617  740281 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:44:17.887747  740281 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:44:17.887851  740281 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:44:17.968853  740281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:17.983063  740281 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:17.983089  740281 api_server.go:166] Checking apiserver status ...
	I0916 13:44:17.983119  740281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:17.995919  740281 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:44:18.004428  740281 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:18.004474  740281 ssh_runner.go:195] Run: ls
	I0916 13:44:18.008890  740281 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:18.013600  740281 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:18.013618  740281 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:44:18.013626  740281 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:18.013640  740281 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:44:18.013941  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:18.013973  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:18.030133  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I0916 13:44:18.030573  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:18.031069  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:18.031088  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:18.031408  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:18.031610  740281 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:44:18.033089  740281 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:44:18.033109  740281 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:18.033448  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:18.033490  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:18.048098  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0916 13:44:18.048554  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:18.049058  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:18.049089  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:18.049393  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:18.049588  740281 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:44:18.052266  740281 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:18.052677  740281 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:18.052708  740281 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:18.052856  740281 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:18.053157  740281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:18.053196  740281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:18.067557  740281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I0916 13:44:18.067870  740281 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:18.068307  740281 main.go:141] libmachine: Using API Version  1
	I0916 13:44:18.068335  740281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:18.068603  740281 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:18.068802  740281 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:44:18.068947  740281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:18.068969  740281 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:44:18.071562  740281 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:18.071967  740281 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:18.071993  740281 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:18.072112  740281 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:44:18.072257  740281 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:44:18.072375  740281 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:44:18.072480  740281 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:44:18.152395  740281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:18.166331  740281 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
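A note on the checks logged above: the warning about the missing freezer cgroup for the kube-apiserver PID is non-fatal; the status command goes on to probe the control-plane endpoint at https://192.168.39.254:8443/healthz, and the 200 response with body "ok" is what yields "apiserver status = Running" in this run. Below is a minimal, self-contained Go sketch of that kind of health probe. It is not minikube's implementation; the endpoint address is copied from the log, and the InsecureSkipVerify transport is only an assumption to keep the sketch standalone (a real check would trust the cluster CA).

	// healthz_probe.go - hedged sketch of an apiserver /healthz probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// apiserverHealthy returns true when GET <endpoint>/healthz answers
	// HTTP 200 with body "ok", which is the condition the log reports.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip TLS verification instead of loading the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}
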
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (3.701668769s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:44:23.320055  740382 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:44:23.320154  740382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:23.320162  740382 out.go:358] Setting ErrFile to fd 2...
	I0916 13:44:23.320166  740382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:23.320318  740382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:44:23.320486  740382 out.go:352] Setting JSON to false
	I0916 13:44:23.320519  740382 mustload.go:65] Loading cluster: ha-190751
	I0916 13:44:23.320588  740382 notify.go:220] Checking for updates...
	I0916 13:44:23.320916  740382 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:44:23.320932  740382 status.go:255] checking status of ha-190751 ...
	I0916 13:44:23.321335  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:23.321385  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:23.339531  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I0916 13:44:23.339999  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:23.340610  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:23.340641  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:23.341001  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:23.341189  740382 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:44:23.342665  740382 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:44:23.342683  740382 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:23.343008  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:23.343052  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:23.358152  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0916 13:44:23.358578  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:23.359015  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:23.359037  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:23.359394  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:23.359560  740382 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:44:23.362459  740382 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:23.362914  740382 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:23.362945  740382 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:23.363107  740382 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:23.363428  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:23.363478  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:23.377789  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41777
	I0916 13:44:23.378246  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:23.378716  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:23.378731  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:23.379012  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:23.379183  740382 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:44:23.379383  740382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:23.379413  740382 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:44:23.381993  740382 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:23.382375  740382 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:23.382401  740382 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:23.382582  740382 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:44:23.382728  740382 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:44:23.382831  740382 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:44:23.382977  740382 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:44:23.469119  740382 ssh_runner.go:195] Run: systemctl --version
	I0916 13:44:23.474856  740382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:23.490206  740382 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:23.490241  740382 api_server.go:166] Checking apiserver status ...
	I0916 13:44:23.490273  740382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:23.503155  740382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:44:23.513171  740382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:23.513219  740382 ssh_runner.go:195] Run: ls
	I0916 13:44:23.518212  740382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:23.522478  740382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:23.522501  740382 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:44:23.522513  740382 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:23.522536  740382 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:44:23.522870  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:23.522910  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:23.538049  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38385
	I0916 13:44:23.538427  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:23.538889  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:23.538910  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:23.539226  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:23.539428  740382 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:44:23.540891  740382 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:44:23.540910  740382 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:23.541286  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:23.541359  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:23.555280  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0916 13:44:23.555626  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:23.556095  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:23.556122  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:23.556468  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:23.556664  740382 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:44:23.559425  740382 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:23.559864  740382 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:23.559891  740382 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:23.560217  740382 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:44:23.560613  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:23.560655  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:23.575846  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0916 13:44:23.576282  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:23.576803  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:23.576823  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:23.577152  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:23.577321  740382 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:44:23.577484  740382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:23.577502  740382 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:44:23.579932  740382 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:23.580313  740382 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:44:23.580335  740382 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:44:23.580450  740382 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:44:23.580591  740382 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:44:23.580722  740382 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:44:23.580870  740382 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	W0916 13:44:26.633913  740382 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.192:22: connect: no route to host
	W0916 13:44:26.634035  740382 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	E0916 13:44:26.634054  740382 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:26.634065  740382 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:44:26.634084  740382 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.192:22: connect: no route to host
	I0916 13:44:26.634090  740382 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:44:26.634395  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:26.634435  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:26.651271  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I0916 13:44:26.651707  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:26.652228  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:26.652250  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:26.652633  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:26.652821  740382 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:44:26.654359  740382 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:44:26.654378  740382 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:26.654701  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:26.654745  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:26.669386  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0916 13:44:26.669821  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:26.670261  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:26.670279  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:26.670555  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:26.670724  740382 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:44:26.673428  740382 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:26.673892  740382 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:26.673917  740382 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:26.674049  740382 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:26.674431  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:26.674476  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:26.688379  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38749
	I0916 13:44:26.688759  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:26.689253  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:26.689276  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:26.689606  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:26.689804  740382 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:44:26.689999  740382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:26.690019  740382 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:44:26.692414  740382 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:26.692839  740382 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:26.692863  740382 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:26.693019  740382 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:44:26.693190  740382 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:44:26.693325  740382 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:44:26.693457  740382 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:44:26.773210  740382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:26.788405  740382 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:26.788435  740382 api_server.go:166] Checking apiserver status ...
	I0916 13:44:26.788473  740382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:26.801497  740382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:44:26.810795  740382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:26.810832  740382 ssh_runner.go:195] Run: ls
	I0916 13:44:26.815312  740382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:26.819715  740382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:26.819740  740382 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:44:26.819774  740382 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:26.819793  740382 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:44:26.820219  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:26.820264  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:26.836321  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0916 13:44:26.836745  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:26.837216  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:26.837236  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:26.837547  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:26.837747  740382 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:44:26.839278  740382 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:44:26.839293  740382 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:26.839570  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:26.839628  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:26.854367  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0916 13:44:26.854895  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:26.855397  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:26.855426  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:26.855958  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:26.856193  740382 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:44:26.858690  740382 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:26.859083  740382 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:26.859108  740382 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:26.859381  740382 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:26.859700  740382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:26.859749  740382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:26.878164  740382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0916 13:44:26.878703  740382 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:26.879277  740382 main.go:141] libmachine: Using API Version  1
	I0916 13:44:26.879297  740382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:26.879597  740382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:26.879790  740382 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:44:26.880012  740382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:26.880034  740382 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:44:26.882898  740382 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:26.883313  740382 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:26.883346  740382 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:26.883474  740382 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:44:26.883646  740382 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:44:26.883789  740382 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:44:26.883909  740382 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:44:26.964869  740382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:26.978974  740382 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
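The second status run fails the same way for ha-190751-m02: the storage-capacity check needs an SSH session to 192.168.39.192:22 to run `df -h /var | awk 'NR==2{print $5}'`, and the dial error "connect: no route to host" is what turns the node's report into Host:Error with Kubelet and APIServer marked Nonexistent. The Go sketch below illustrates only the reachability step that is failing; it is a hypothetical helper, not minikube code, and a plain TCP dial stands in for the full SSH handshake.

	// ssh_reachable.go - hedged sketch of the dial step that fails for m02.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// sshPortReachable reports whether a TCP connection to host:22 can be
	// opened within the timeout; this is the step that fails in the log above.
	func sshPortReachable(host string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
		if err != nil {
			return err // e.g. "connect: no route to host" while the node is down
		}
		return conn.Close()
	}

	func main() {
		if err := sshPortReachable("192.168.39.192", 3*time.Second); err != nil {
			fmt.Println("node unreachable:", err)
		}
	}
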
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 7 (610.779418ms)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-190751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:44:34.636769  740534 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:44:34.636896  740534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:34.636909  740534 out.go:358] Setting ErrFile to fd 2...
	I0916 13:44:34.636915  740534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:34.637094  740534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:44:34.637255  740534 out.go:352] Setting JSON to false
	I0916 13:44:34.637286  740534 mustload.go:65] Loading cluster: ha-190751
	I0916 13:44:34.637331  740534 notify.go:220] Checking for updates...
	I0916 13:44:34.637708  740534 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:44:34.637725  740534 status.go:255] checking status of ha-190751 ...
	I0916 13:44:34.638123  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:34.638179  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:34.653132  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39297
	I0916 13:44:34.653675  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:34.654393  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:34.654420  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:34.654772  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:34.654973  740534 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:44:34.656477  740534 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:44:34.656496  740534 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:34.656939  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:34.656985  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:34.672119  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0916 13:44:34.672542  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:34.672964  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:34.672986  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:34.673348  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:34.673523  740534 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:44:34.676597  740534 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:34.677089  740534 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:34.677113  740534 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:34.677248  740534 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:44:34.677639  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:34.677705  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:34.692457  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I0916 13:44:34.692870  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:34.693410  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:34.693433  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:34.693782  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:34.693922  740534 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:44:34.694088  740534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:34.694117  740534 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:44:34.696689  740534 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:34.697021  740534 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:44:34.697045  740534 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:44:34.697115  740534 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:44:34.697265  740534 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:44:34.697403  740534 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:44:34.697538  740534 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:44:34.786266  740534 ssh_runner.go:195] Run: systemctl --version
	I0916 13:44:34.792643  740534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:34.812519  740534 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:34.812560  740534 api_server.go:166] Checking apiserver status ...
	I0916 13:44:34.812596  740534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:34.829743  740534 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0916 13:44:34.840741  740534 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:34.840794  740534 ssh_runner.go:195] Run: ls
	I0916 13:44:34.845778  740534 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:34.850159  740534 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:34.850179  740534 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:44:34.850192  740534 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:34.850214  740534 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:44:34.850502  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:34.850556  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:34.865419  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0916 13:44:34.865916  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:34.866383  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:34.866403  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:34.866702  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:34.866878  740534 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:44:34.868350  740534 status.go:330] ha-190751-m02 host status = "Stopped" (err=<nil>)
	I0916 13:44:34.868366  740534 status.go:343] host is not running, skipping remaining checks
	I0916 13:44:34.868375  740534 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:34.868391  740534 status.go:255] checking status of ha-190751-m03 ...
	I0916 13:44:34.868779  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:34.868821  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:34.883035  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0916 13:44:34.883481  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:34.883929  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:34.883951  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:34.884269  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:34.884454  740534 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:44:34.885908  740534 status.go:330] ha-190751-m03 host status = "Running" (err=<nil>)
	I0916 13:44:34.885926  740534 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:34.886208  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:34.886240  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:34.900892  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0916 13:44:34.901336  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:34.901844  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:34.901876  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:34.902158  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:34.902320  740534 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:44:34.904771  740534 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:34.905239  740534 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:34.905265  740534 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:34.905390  740534 host.go:66] Checking if "ha-190751-m03" exists ...
	I0916 13:44:34.905699  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:34.905733  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:34.922014  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45023
	I0916 13:44:34.922418  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:34.922911  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:34.922931  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:34.923210  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:34.923423  740534 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:44:34.923618  740534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:34.923642  740534 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:44:34.926215  740534 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:34.926582  740534 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:34.926612  740534 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:34.926714  740534 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:44:34.926864  740534 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:44:34.926979  740534 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:44:34.927082  740534 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:44:35.004594  740534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:35.020373  740534 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:44:35.020399  740534 api_server.go:166] Checking apiserver status ...
	I0916 13:44:35.020427  740534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:44:35.034293  740534 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	W0916 13:44:35.042476  740534 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:44:35.042523  740534 ssh_runner.go:195] Run: ls
	I0916 13:44:35.046628  740534 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:44:35.050849  740534 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:44:35.050869  740534 status.go:422] ha-190751-m03 apiserver status = Running (err=<nil>)
	I0916 13:44:35.050877  740534 status.go:257] ha-190751-m03 status: &{Name:ha-190751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:44:35.050895  740534 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:44:35.051212  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:35.051253  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:35.066181  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0916 13:44:35.066568  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:35.067047  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:35.067069  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:35.067426  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:35.067597  740534 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:44:35.069140  740534 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:44:35.069159  740534 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:35.069443  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:35.069474  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:35.084012  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0916 13:44:35.084465  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:35.084893  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:35.084911  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:35.085202  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:35.085357  740534 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:44:35.087981  740534 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:35.088522  740534 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:35.088548  740534 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:35.088734  740534 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:44:35.089027  740534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:35.089060  740534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:35.103883  740534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0916 13:44:35.104282  740534 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:35.104729  740534 main.go:141] libmachine: Using API Version  1
	I0916 13:44:35.104748  740534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:35.105047  740534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:35.105209  740534 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:44:35.105382  740534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:44:35.105404  740534 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:44:35.107852  740534 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:35.108235  740534 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:35.108254  740534 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:35.108413  740534 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:44:35.108586  740534 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:44:35.108721  740534 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:44:35.108840  740534 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:44:35.189128  740534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:44:35.203259  740534 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr" : exit status 7
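The status probe captured in the stderr block above checks each control-plane node the same way: it looks for a kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), attempts to read that PID's freezer cgroup (which is logged only as a warning when absent and does not fail the check), and finally issues an HTTPS GET against /healthz on port 8443. Exit status 7 from "minikube status" means at least one component was not reported as running; in this run the m02 control-plane node had just been stopped and restarted (see the Audit table below), which is the likely cause, since m03 and m04 above both report Running. For manual triage, a minimal Go sketch of the same healthz probe is shown below; the endpoint address (taken from the log above) and the InsecureSkipVerify shortcut are assumptions for a quick ad-hoc check, not minikube's own implementation, which authenticates with the profile's client certificates.

	// healthz_probe.go: hedged sketch of the apiserver health check seen in the log.
	// Assumes the HA VIP/endpoint https://192.168.39.254:8443 from the stderr above
	// and skips TLS verification purely for manual debugging.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver returns 200 with body "ok", matching the log above.
		fmt.Printf("%d %s\n", resp.StatusCode, string(body))
	}
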
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-190751 -n ha-190751
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-190751 logs -n 25: (1.271831632s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751:/home/docker/cp-test_ha-190751-m03_ha-190751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751 sudo cat                                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m02:/home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m04 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp testdata/cp-test.txt                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751:/home/docker/cp-test_ha-190751-m04_ha-190751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751 sudo cat                                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m02:/home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03:/home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m03 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-190751 node stop m02 -v=7                                                     | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-190751 node start m02 -v=7                                                    | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 13:36:56
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 13:36:56.678517  735111 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:36:56.678787  735111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:36:56.678797  735111 out.go:358] Setting ErrFile to fd 2...
	I0916 13:36:56.678801  735111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:36:56.679003  735111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:36:56.679607  735111 out.go:352] Setting JSON to false
	I0916 13:36:56.680520  735111 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11966,"bootTime":1726481851,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 13:36:56.680631  735111 start.go:139] virtualization: kvm guest
	I0916 13:36:56.682617  735111 out.go:177] * [ha-190751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 13:36:56.683792  735111 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 13:36:56.683791  735111 notify.go:220] Checking for updates...
	I0916 13:36:56.685057  735111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 13:36:56.686202  735111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:36:56.687271  735111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:36:56.688199  735111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 13:36:56.689143  735111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 13:36:56.690257  735111 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 13:36:56.723912  735111 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 13:36:56.725038  735111 start.go:297] selected driver: kvm2
	I0916 13:36:56.725048  735111 start.go:901] validating driver "kvm2" against <nil>
	I0916 13:36:56.725058  735111 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 13:36:56.725720  735111 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:36:56.725788  735111 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 13:36:56.739803  735111 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 13:36:56.739851  735111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 13:36:56.740082  735111 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:36:56.740112  735111 cni.go:84] Creating CNI manager for ""
	I0916 13:36:56.740151  735111 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 13:36:56.740158  735111 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 13:36:56.740208  735111 start.go:340] cluster config:
	{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:36:56.740299  735111 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:36:56.741805  735111 out.go:177] * Starting "ha-190751" primary control-plane node in "ha-190751" cluster
	I0916 13:36:56.742781  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:36:56.742820  735111 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 13:36:56.742829  735111 cache.go:56] Caching tarball of preloaded images
	I0916 13:36:56.742896  735111 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:36:56.742905  735111 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:36:56.743197  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:36:56.743218  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json: {Name:mk79170c9af09964bad9fa686bda7acb0bb551ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:36:56.743344  735111 start.go:360] acquireMachinesLock for ha-190751: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:36:56.743372  735111 start.go:364] duration metric: took 14.904µs to acquireMachinesLock for "ha-190751"
	I0916 13:36:56.743390  735111 start.go:93] Provisioning new machine with config: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:36:56.743443  735111 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 13:36:56.744759  735111 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 13:36:56.744866  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:36:56.744897  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:36:56.758738  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0916 13:36:56.759112  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:36:56.759587  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:36:56.759607  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:36:56.759901  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:36:56.760105  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:36:56.760231  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:36:56.760374  735111 start.go:159] libmachine.API.Create for "ha-190751" (driver="kvm2")
	I0916 13:36:56.760406  735111 client.go:168] LocalClient.Create starting
	I0916 13:36:56.760439  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 13:36:56.760479  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:36:56.760496  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:36:56.760560  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 13:36:56.760578  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:36:56.760592  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:36:56.760612  735111 main.go:141] libmachine: Running pre-create checks...
	I0916 13:36:56.760620  735111 main.go:141] libmachine: (ha-190751) Calling .PreCreateCheck
	I0916 13:36:56.761019  735111 main.go:141] libmachine: (ha-190751) Calling .GetConfigRaw
	I0916 13:36:56.761357  735111 main.go:141] libmachine: Creating machine...
	I0916 13:36:56.761369  735111 main.go:141] libmachine: (ha-190751) Calling .Create
	I0916 13:36:56.761471  735111 main.go:141] libmachine: (ha-190751) Creating KVM machine...
	I0916 13:36:56.762874  735111 main.go:141] libmachine: (ha-190751) DBG | found existing default KVM network
	I0916 13:36:56.763511  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:56.763387  735134 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0916 13:36:56.763544  735111 main.go:141] libmachine: (ha-190751) DBG | created network xml: 
	I0916 13:36:56.763557  735111 main.go:141] libmachine: (ha-190751) DBG | <network>
	I0916 13:36:56.763573  735111 main.go:141] libmachine: (ha-190751) DBG |   <name>mk-ha-190751</name>
	I0916 13:36:56.763580  735111 main.go:141] libmachine: (ha-190751) DBG |   <dns enable='no'/>
	I0916 13:36:56.763585  735111 main.go:141] libmachine: (ha-190751) DBG |   
	I0916 13:36:56.763592  735111 main.go:141] libmachine: (ha-190751) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 13:36:56.763597  735111 main.go:141] libmachine: (ha-190751) DBG |     <dhcp>
	I0916 13:36:56.763604  735111 main.go:141] libmachine: (ha-190751) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 13:36:56.763611  735111 main.go:141] libmachine: (ha-190751) DBG |     </dhcp>
	I0916 13:36:56.763621  735111 main.go:141] libmachine: (ha-190751) DBG |   </ip>
	I0916 13:36:56.763628  735111 main.go:141] libmachine: (ha-190751) DBG |   
	I0916 13:36:56.763641  735111 main.go:141] libmachine: (ha-190751) DBG | </network>
	I0916 13:36:56.763652  735111 main.go:141] libmachine: (ha-190751) DBG | 
	I0916 13:36:56.768237  735111 main.go:141] libmachine: (ha-190751) DBG | trying to create private KVM network mk-ha-190751 192.168.39.0/24...
	I0916 13:36:56.829521  735111 main.go:141] libmachine: (ha-190751) DBG | private KVM network mk-ha-190751 192.168.39.0/24 created
	I0916 13:36:56.829557  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:56.829473  735134 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:36:56.829572  735111 main.go:141] libmachine: (ha-190751) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751 ...
	I0916 13:36:56.829590  735111 main.go:141] libmachine: (ha-190751) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 13:36:56.829615  735111 main.go:141] libmachine: (ha-190751) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 13:36:57.095789  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:57.095611  735134 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa...
	I0916 13:36:57.157560  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:57.157443  735134 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/ha-190751.rawdisk...
	I0916 13:36:57.157596  735111 main.go:141] libmachine: (ha-190751) DBG | Writing magic tar header
	I0916 13:36:57.157615  735111 main.go:141] libmachine: (ha-190751) DBG | Writing SSH key tar header
	I0916 13:36:57.157625  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:57.157549  735134 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751 ...
	I0916 13:36:57.157641  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751
	I0916 13:36:57.157724  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751 (perms=drwx------)
	I0916 13:36:57.157752  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 13:36:57.157764  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 13:36:57.157777  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:36:57.157804  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 13:36:57.157815  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 13:36:57.157826  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home/jenkins
	I0916 13:36:57.157836  735111 main.go:141] libmachine: (ha-190751) DBG | Checking permissions on dir: /home
	I0916 13:36:57.157847  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 13:36:57.157862  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 13:36:57.157875  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 13:36:57.157888  735111 main.go:141] libmachine: (ha-190751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 13:36:57.157898  735111 main.go:141] libmachine: (ha-190751) Creating domain...
	I0916 13:36:57.157916  735111 main.go:141] libmachine: (ha-190751) DBG | Skipping /home - not owner
	I0916 13:36:57.158843  735111 main.go:141] libmachine: (ha-190751) define libvirt domain using xml: 
	I0916 13:36:57.158858  735111 main.go:141] libmachine: (ha-190751) <domain type='kvm'>
	I0916 13:36:57.158864  735111 main.go:141] libmachine: (ha-190751)   <name>ha-190751</name>
	I0916 13:36:57.158869  735111 main.go:141] libmachine: (ha-190751)   <memory unit='MiB'>2200</memory>
	I0916 13:36:57.158874  735111 main.go:141] libmachine: (ha-190751)   <vcpu>2</vcpu>
	I0916 13:36:57.158877  735111 main.go:141] libmachine: (ha-190751)   <features>
	I0916 13:36:57.158882  735111 main.go:141] libmachine: (ha-190751)     <acpi/>
	I0916 13:36:57.158886  735111 main.go:141] libmachine: (ha-190751)     <apic/>
	I0916 13:36:57.158890  735111 main.go:141] libmachine: (ha-190751)     <pae/>
	I0916 13:36:57.158901  735111 main.go:141] libmachine: (ha-190751)     
	I0916 13:36:57.158911  735111 main.go:141] libmachine: (ha-190751)   </features>
	I0916 13:36:57.158918  735111 main.go:141] libmachine: (ha-190751)   <cpu mode='host-passthrough'>
	I0916 13:36:57.158928  735111 main.go:141] libmachine: (ha-190751)   
	I0916 13:36:57.158944  735111 main.go:141] libmachine: (ha-190751)   </cpu>
	I0916 13:36:57.158954  735111 main.go:141] libmachine: (ha-190751)   <os>
	I0916 13:36:57.158978  735111 main.go:141] libmachine: (ha-190751)     <type>hvm</type>
	I0916 13:36:57.158998  735111 main.go:141] libmachine: (ha-190751)     <boot dev='cdrom'/>
	I0916 13:36:57.159028  735111 main.go:141] libmachine: (ha-190751)     <boot dev='hd'/>
	I0916 13:36:57.159049  735111 main.go:141] libmachine: (ha-190751)     <bootmenu enable='no'/>
	I0916 13:36:57.159057  735111 main.go:141] libmachine: (ha-190751)   </os>
	I0916 13:36:57.159062  735111 main.go:141] libmachine: (ha-190751)   <devices>
	I0916 13:36:57.159071  735111 main.go:141] libmachine: (ha-190751)     <disk type='file' device='cdrom'>
	I0916 13:36:57.159077  735111 main.go:141] libmachine: (ha-190751)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/boot2docker.iso'/>
	I0916 13:36:57.159087  735111 main.go:141] libmachine: (ha-190751)       <target dev='hdc' bus='scsi'/>
	I0916 13:36:57.159097  735111 main.go:141] libmachine: (ha-190751)       <readonly/>
	I0916 13:36:57.159105  735111 main.go:141] libmachine: (ha-190751)     </disk>
	I0916 13:36:57.159115  735111 main.go:141] libmachine: (ha-190751)     <disk type='file' device='disk'>
	I0916 13:36:57.159136  735111 main.go:141] libmachine: (ha-190751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 13:36:57.159151  735111 main.go:141] libmachine: (ha-190751)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/ha-190751.rawdisk'/>
	I0916 13:36:57.159159  735111 main.go:141] libmachine: (ha-190751)       <target dev='hda' bus='virtio'/>
	I0916 13:36:57.159163  735111 main.go:141] libmachine: (ha-190751)     </disk>
	I0916 13:36:57.159171  735111 main.go:141] libmachine: (ha-190751)     <interface type='network'>
	I0916 13:36:57.159182  735111 main.go:141] libmachine: (ha-190751)       <source network='mk-ha-190751'/>
	I0916 13:36:57.159194  735111 main.go:141] libmachine: (ha-190751)       <model type='virtio'/>
	I0916 13:36:57.159201  735111 main.go:141] libmachine: (ha-190751)     </interface>
	I0916 13:36:57.159212  735111 main.go:141] libmachine: (ha-190751)     <interface type='network'>
	I0916 13:36:57.159222  735111 main.go:141] libmachine: (ha-190751)       <source network='default'/>
	I0916 13:36:57.159230  735111 main.go:141] libmachine: (ha-190751)       <model type='virtio'/>
	I0916 13:36:57.159239  735111 main.go:141] libmachine: (ha-190751)     </interface>
	I0916 13:36:57.159252  735111 main.go:141] libmachine: (ha-190751)     <serial type='pty'>
	I0916 13:36:57.159261  735111 main.go:141] libmachine: (ha-190751)       <target port='0'/>
	I0916 13:36:57.159266  735111 main.go:141] libmachine: (ha-190751)     </serial>
	I0916 13:36:57.159273  735111 main.go:141] libmachine: (ha-190751)     <console type='pty'>
	I0916 13:36:57.159282  735111 main.go:141] libmachine: (ha-190751)       <target type='serial' port='0'/>
	I0916 13:36:57.159296  735111 main.go:141] libmachine: (ha-190751)     </console>
	I0916 13:36:57.159304  735111 main.go:141] libmachine: (ha-190751)     <rng model='virtio'>
	I0916 13:36:57.159312  735111 main.go:141] libmachine: (ha-190751)       <backend model='random'>/dev/random</backend>
	I0916 13:36:57.159322  735111 main.go:141] libmachine: (ha-190751)     </rng>
	I0916 13:36:57.159328  735111 main.go:141] libmachine: (ha-190751)     
	I0916 13:36:57.159338  735111 main.go:141] libmachine: (ha-190751)     
	I0916 13:36:57.159344  735111 main.go:141] libmachine: (ha-190751)   </devices>
	I0916 13:36:57.159358  735111 main.go:141] libmachine: (ha-190751) </domain>
	I0916 13:36:57.159369  735111 main.go:141] libmachine: (ha-190751) 
	I0916 13:36:57.163337  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:69:e2:cf in network default
	I0916 13:36:57.163907  735111 main.go:141] libmachine: (ha-190751) Ensuring networks are active...
	I0916 13:36:57.163927  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:57.164583  735111 main.go:141] libmachine: (ha-190751) Ensuring network default is active
	I0916 13:36:57.164908  735111 main.go:141] libmachine: (ha-190751) Ensuring network mk-ha-190751 is active
	I0916 13:36:57.165378  735111 main.go:141] libmachine: (ha-190751) Getting domain xml...
	I0916 13:36:57.166090  735111 main.go:141] libmachine: (ha-190751) Creating domain...
	I0916 13:36:58.333062  735111 main.go:141] libmachine: (ha-190751) Waiting to get IP...
	I0916 13:36:58.333963  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:58.334354  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:58.334424  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:58.334357  735134 retry.go:31] will retry after 279.525118ms: waiting for machine to come up
	I0916 13:36:58.615804  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:58.616232  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:58.616272  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:58.616184  735134 retry.go:31] will retry after 363.505809ms: waiting for machine to come up
	I0916 13:36:58.981741  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:58.982158  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:58.982188  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:58.982109  735134 retry.go:31] will retry after 369.018808ms: waiting for machine to come up
	I0916 13:36:59.352601  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:59.353031  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:59.353063  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:59.352967  735134 retry.go:31] will retry after 560.553294ms: waiting for machine to come up
	I0916 13:36:59.914639  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:36:59.915027  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:36:59.915059  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:36:59.914973  735134 retry.go:31] will retry after 665.558726ms: waiting for machine to come up
	I0916 13:37:00.581880  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:00.582306  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:00.582332  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:00.582263  735134 retry.go:31] will retry after 948.01504ms: waiting for machine to come up
	I0916 13:37:01.531610  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:01.532007  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:01.532040  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:01.531979  735134 retry.go:31] will retry after 736.553093ms: waiting for machine to come up
	I0916 13:37:02.270426  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:02.270790  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:02.270829  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:02.270735  735134 retry.go:31] will retry after 1.270424871s: waiting for machine to come up
	I0916 13:37:03.543093  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:03.543487  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:03.543508  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:03.543459  735134 retry.go:31] will retry after 1.59125153s: waiting for machine to come up
	I0916 13:37:05.136091  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:05.136429  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:05.136458  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:05.136382  735134 retry.go:31] will retry after 1.693626671s: waiting for machine to come up
	I0916 13:37:06.832020  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:06.832535  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:06.832564  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:06.832491  735134 retry.go:31] will retry after 1.948764787s: waiting for machine to come up
	I0916 13:37:08.783618  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:08.784008  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:08.784030  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:08.783966  735134 retry.go:31] will retry after 2.647820583s: waiting for machine to come up
	I0916 13:37:11.433054  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:11.433446  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:11.433474  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:11.433404  735134 retry.go:31] will retry after 3.505266082s: waiting for machine to come up
	I0916 13:37:14.942445  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:14.942834  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find current IP address of domain ha-190751 in network mk-ha-190751
	I0916 13:37:14.942856  735111 main.go:141] libmachine: (ha-190751) DBG | I0916 13:37:14.942793  735134 retry.go:31] will retry after 3.656594435s: waiting for machine to come up
	I0916 13:37:18.601473  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.601963  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has current primary IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.601994  735111 main.go:141] libmachine: (ha-190751) Found IP for machine: 192.168.39.94
	I0916 13:37:18.602008  735111 main.go:141] libmachine: (ha-190751) Reserving static IP address...
	I0916 13:37:18.602385  735111 main.go:141] libmachine: (ha-190751) DBG | unable to find host DHCP lease matching {name: "ha-190751", mac: "52:54:00:c8:dd:8b", ip: "192.168.39.94"} in network mk-ha-190751
	I0916 13:37:18.672709  735111 main.go:141] libmachine: (ha-190751) Reserved static IP address: 192.168.39.94
	I0916 13:37:18.672734  735111 main.go:141] libmachine: (ha-190751) DBG | Getting to WaitForSSH function...
	I0916 13:37:18.672742  735111 main.go:141] libmachine: (ha-190751) Waiting for SSH to be available...
	I0916 13:37:18.675170  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.675604  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:18.675655  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.675818  735111 main.go:141] libmachine: (ha-190751) DBG | Using SSH client type: external
	I0916 13:37:18.675849  735111 main.go:141] libmachine: (ha-190751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa (-rw-------)
	I0916 13:37:18.675884  735111 main.go:141] libmachine: (ha-190751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:37:18.675899  735111 main.go:141] libmachine: (ha-190751) DBG | About to run SSH command:
	I0916 13:37:18.675935  735111 main.go:141] libmachine: (ha-190751) DBG | exit 0
	I0916 13:37:18.801655  735111 main.go:141] libmachine: (ha-190751) DBG | SSH cmd err, output: <nil>: 
	I0916 13:37:18.801941  735111 main.go:141] libmachine: (ha-190751) KVM machine creation complete!
	I0916 13:37:18.802283  735111 main.go:141] libmachine: (ha-190751) Calling .GetConfigRaw
	I0916 13:37:18.802859  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:18.803052  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:18.803228  735111 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 13:37:18.803245  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:18.804506  735111 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 13:37:18.804519  735111 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 13:37:18.804524  735111 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 13:37:18.804529  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:18.806823  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.807131  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:18.807155  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.807290  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:18.807448  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.807568  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.807667  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:18.807798  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:18.808046  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:18.808060  735111 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 13:37:18.916996  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:37:18.917018  735111 main.go:141] libmachine: Detecting the provisioner...
	I0916 13:37:18.917027  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:18.920186  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.920536  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:18.920568  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:18.920770  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:18.921013  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.921176  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:18.921327  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:18.921499  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:18.921739  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:18.921763  735111 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 13:37:19.030221  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 13:37:19.030307  735111 main.go:141] libmachine: found compatible host: buildroot
	I0916 13:37:19.030318  735111 main.go:141] libmachine: Provisioning with buildroot...
	I0916 13:37:19.030326  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:37:19.030581  735111 buildroot.go:166] provisioning hostname "ha-190751"
	I0916 13:37:19.030614  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:37:19.030818  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.033149  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.033497  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.033520  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.033659  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.033842  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.033992  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.034105  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.034240  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.034434  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.034448  735111 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751 && echo "ha-190751" | sudo tee /etc/hostname
	I0916 13:37:19.155215  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751
	
	I0916 13:37:19.155246  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.157702  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.158016  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.158045  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.158188  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.158387  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.158539  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.158685  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.158834  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.159057  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.159080  735111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:37:19.274380  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:37:19.274408  735111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:37:19.274431  735111 buildroot.go:174] setting up certificates
	I0916 13:37:19.274442  735111 provision.go:84] configureAuth start
	I0916 13:37:19.274451  735111 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:37:19.274755  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:19.277120  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.277480  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.277503  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.277636  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.279583  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.279832  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.279850  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.280031  735111 provision.go:143] copyHostCerts
	I0916 13:37:19.280058  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:37:19.280085  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:37:19.280095  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:37:19.280158  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:37:19.280230  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:37:19.280247  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:37:19.280253  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:37:19.280277  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:37:19.280315  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:37:19.280342  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:37:19.280354  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:37:19.280377  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:37:19.280421  735111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751 san=[127.0.0.1 192.168.39.94 ha-190751 localhost minikube]
	I0916 13:37:19.358656  735111 provision.go:177] copyRemoteCerts
	I0916 13:37:19.358719  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:37:19.358751  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.361346  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.361631  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.361660  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.361841  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.362025  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.362181  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.362298  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:19.447984  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:37:19.448069  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0916 13:37:19.471720  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:37:19.471802  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:37:19.494723  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:37:19.494803  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 13:37:19.517505  735111 provision.go:87] duration metric: took 243.050824ms to configureAuth
	I0916 13:37:19.517532  735111 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:37:19.517768  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:37:19.517863  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.520489  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.520804  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.520836  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.520943  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.521124  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.521280  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.521380  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.521534  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.521732  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.521746  735111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:37:19.747142  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 13:37:19.747168  735111 main.go:141] libmachine: Checking connection to Docker...
	I0916 13:37:19.747195  735111 main.go:141] libmachine: (ha-190751) Calling .GetURL
	I0916 13:37:19.748475  735111 main.go:141] libmachine: (ha-190751) DBG | Using libvirt version 6000000
	I0916 13:37:19.751506  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.751830  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.751854  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.752023  735111 main.go:141] libmachine: Docker is up and running!
	I0916 13:37:19.752039  735111 main.go:141] libmachine: Reticulating splines...
	I0916 13:37:19.752046  735111 client.go:171] duration metric: took 22.991630844s to LocalClient.Create
	I0916 13:37:19.752067  735111 start.go:167] duration metric: took 22.991694677s to libmachine.API.Create "ha-190751"
	I0916 13:37:19.752075  735111 start.go:293] postStartSetup for "ha-190751" (driver="kvm2")
	I0916 13:37:19.752084  735111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:37:19.752101  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.752313  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:37:19.752346  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.754590  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.754909  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.754934  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.755104  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.755250  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.755391  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.755530  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:19.840652  735111 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:37:19.844841  735111 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:37:19.844870  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:37:19.844951  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:37:19.845056  735111 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:37:19.845069  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:37:19.845191  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:37:19.855044  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:37:19.878510  735111 start.go:296] duration metric: took 126.418501ms for postStartSetup
	I0916 13:37:19.878588  735111 main.go:141] libmachine: (ha-190751) Calling .GetConfigRaw
	I0916 13:37:19.879237  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:19.881802  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.882162  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.882191  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.882390  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:37:19.882564  735111 start.go:128] duration metric: took 23.139111441s to createHost
	I0916 13:37:19.882591  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.884751  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.885045  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.885083  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.885209  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:19.885393  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.885536  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:19.885701  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:19.885842  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:37:19.886010  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:37:19.886025  735111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:37:19.994189  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726493839.969601699
	
	I0916 13:37:19.994215  735111 fix.go:216] guest clock: 1726493839.969601699
	I0916 13:37:19.994225  735111 fix.go:229] Guest: 2024-09-16 13:37:19.969601699 +0000 UTC Remote: 2024-09-16 13:37:19.882580313 +0000 UTC m=+23.238484318 (delta=87.021386ms)
	I0916 13:37:19.994252  735111 fix.go:200] guest clock delta is within tolerance: 87.021386ms
	I0916 13:37:19.994259  735111 start.go:83] releasing machines lock for "ha-190751", held for 23.25087569s
	I0916 13:37:19.994283  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.994538  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:19.997323  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.997698  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:19.997724  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:19.997857  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.998381  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.998573  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:19.998692  735111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:37:19.998736  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:19.998778  735111 ssh_runner.go:195] Run: cat /version.json
	I0916 13:37:19.998802  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:20.001458  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.001533  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.001871  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:20.001904  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:20.001925  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.001944  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:20.002037  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:20.002189  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:20.002204  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:20.002342  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:20.002375  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:20.002471  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:20.002463  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:20.002616  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:20.101835  735111 ssh_runner.go:195] Run: systemctl --version
	I0916 13:37:20.107791  735111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:37:20.265880  735111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:37:20.271930  735111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:37:20.271994  735111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:37:20.288455  735111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 13:37:20.288478  735111 start.go:495] detecting cgroup driver to use...
	I0916 13:37:20.288548  735111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:37:20.304990  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:37:20.318846  735111 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:37:20.318900  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:37:20.332278  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:37:20.345609  735111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:37:20.461469  735111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:37:20.606006  735111 docker.go:233] disabling docker service ...
	I0916 13:37:20.606088  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:37:20.619614  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:37:20.632364  735111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:37:20.758642  735111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:37:20.874000  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:37:20.887215  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:37:20.904742  735111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:37:20.904812  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.914408  735111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:37:20.914475  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.923964  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.933297  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.942868  735111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:37:20.952532  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.962048  735111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.977737  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:37:20.987167  735111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:37:20.995832  735111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 13:37:20.995898  735111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 13:37:21.009048  735111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 13:37:21.018792  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:37:21.130298  735111 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 13:37:21.220343  735111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:37:21.220470  735111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:37:21.225075  735111 start.go:563] Will wait 60s for crictl version
	I0916 13:37:21.225120  735111 ssh_runner.go:195] Run: which crictl
	I0916 13:37:21.228937  735111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:37:21.267510  735111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:37:21.267586  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:37:21.295850  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:37:21.323753  735111 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:37:21.324919  735111 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:37:21.327486  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:21.327801  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:21.327845  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:21.328020  735111 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:37:21.331975  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:37:21.344361  735111 kubeadm.go:883] updating cluster {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 13:37:21.344463  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:37:21.344510  735111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:37:21.375985  735111 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 13:37:21.376057  735111 ssh_runner.go:195] Run: which lz4
	I0916 13:37:21.379835  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 13:37:21.379944  735111 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 13:37:21.383892  735111 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 13:37:21.383923  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 13:37:22.695033  735111 crio.go:462] duration metric: took 1.315122762s to copy over tarball
	I0916 13:37:22.695123  735111 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 13:37:24.632050  735111 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.936892624s)
	I0916 13:37:24.632087  735111 crio.go:469] duration metric: took 1.937024427s to extract the tarball
	I0916 13:37:24.632098  735111 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 13:37:24.667998  735111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:37:24.710398  735111 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 13:37:24.710426  735111 cache_images.go:84] Images are preloaded, skipping loading
	I0916 13:37:24.710436  735111 kubeadm.go:934] updating node { 192.168.39.94 8443 v1.31.1 crio true true} ...
	I0916 13:37:24.710548  735111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 13:37:24.710628  735111 ssh_runner.go:195] Run: crio config
	I0916 13:37:24.758181  735111 cni.go:84] Creating CNI manager for ""
	I0916 13:37:24.758231  735111 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 13:37:24.758261  735111 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 13:37:24.758319  735111 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-190751 NodeName:ha-190751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 13:37:24.758657  735111 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-190751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 13:37:24.758926  735111 kube-vip.go:115] generating kube-vip config ...
	I0916 13:37:24.758973  735111 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:37:24.776756  735111 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:37:24.776868  735111 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0916 13:37:24.776928  735111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:37:24.786665  735111 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 13:37:24.786733  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 13:37:24.795903  735111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 13:37:24.811114  735111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:37:24.826580  735111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 13:37:24.841958  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0916 13:37:24.857386  735111 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:37:24.860966  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:37:24.872483  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:37:25.003846  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:37:25.020742  735111 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.94
	I0916 13:37:25.020775  735111 certs.go:194] generating shared ca certs ...
	I0916 13:37:25.020796  735111 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.021003  735111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:37:25.021076  735111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:37:25.021091  735111 certs.go:256] generating profile certs ...
	I0916 13:37:25.021155  735111 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:37:25.021174  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt with IP's: []
	I0916 13:37:25.079578  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt ...
	I0916 13:37:25.079607  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt: {Name:mk140d1c2f4c990916187ba804583d1a9cf33684 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.079791  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key ...
	I0916 13:37:25.079810  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key: {Name:mk5e962e9f96c994b7c25f532905372cf816e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.079905  735111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481
	I0916 13:37:25.079919  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.254]
	I0916 13:37:25.235476  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481 ...
	I0916 13:37:25.235509  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481: {Name:mk417c790613e4e78adbdd4499ae6a9c00dc3e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.235708  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481 ...
	I0916 13:37:25.235727  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481: {Name:mkfbbc964df63ee80e08357dfbaf68844994ce1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.235825  735111 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.c2c0f481 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:37:25.235950  735111 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.c2c0f481 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
	I0916 13:37:25.236037  735111 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:37:25.236058  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt with IP's: []
	I0916 13:37:25.593211  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt ...
	I0916 13:37:25.593242  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt: {Name:mkd0a58170323377b51ec2422eecfc9ba233e69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.593617  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key ...
	I0916 13:37:25.593652  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key: {Name:mk697035e09a8239fdc475e00fc850425d13fa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:25.593818  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:37:25.593840  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:37:25.593854  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:37:25.593872  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:37:25.593974  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:37:25.594027  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:37:25.594051  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:37:25.594072  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:37:25.594157  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:37:25.594216  735111 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:37:25.594233  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:37:25.594276  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:37:25.594316  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:37:25.594356  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:37:25.594431  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:37:25.594475  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.594500  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.594523  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.595141  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:37:25.620663  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:37:25.642887  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:37:25.665084  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:37:25.687133  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 13:37:25.709071  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 13:37:25.732657  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:37:25.755038  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:37:25.780332  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:37:25.808412  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:37:25.832402  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:37:25.858022  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 13:37:25.873892  735111 ssh_runner.go:195] Run: openssl version
	I0916 13:37:25.879546  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:37:25.890171  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.894507  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.894561  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:37:25.900313  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 13:37:25.911168  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:37:25.921818  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.926147  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.926200  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:37:25.931623  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:37:25.942306  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:37:25.952913  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.957227  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.957296  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:37:25.962718  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 13:37:25.972813  735111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:37:25.976658  735111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 13:37:25.976719  735111 kubeadm.go:392] StartCluster: {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:37:25.976832  735111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 13:37:25.976891  735111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 13:37:26.012234  735111 cri.go:89] found id: ""
	I0916 13:37:26.012309  735111 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 13:37:26.022128  735111 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 13:37:26.031471  735111 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 13:37:26.040533  735111 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 13:37:26.040551  735111 kubeadm.go:157] found existing configuration files:
	
	I0916 13:37:26.040587  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 13:37:26.049279  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 13:37:26.049314  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 13:37:26.058199  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 13:37:26.066645  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 13:37:26.066701  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 13:37:26.075640  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 13:37:26.084115  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 13:37:26.084158  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 13:37:26.093121  735111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 13:37:26.101594  735111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 13:37:26.101649  735111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 13:37:26.110723  735111 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 13:37:26.204835  735111 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 13:37:26.204894  735111 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 13:37:26.321862  735111 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 13:37:26.321980  735111 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 13:37:26.322110  735111 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 13:37:26.331078  735111 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 13:37:26.353738  735111 out.go:235]   - Generating certificates and keys ...
	I0916 13:37:26.353891  735111 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 13:37:26.354005  735111 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 13:37:26.395930  735111 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 13:37:26.499160  735111 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 13:37:26.632167  735111 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 13:37:26.833214  735111 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 13:37:27.181214  735111 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 13:37:27.181393  735111 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-190751 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I0916 13:37:27.371833  735111 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 13:37:27.372003  735111 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-190751 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I0916 13:37:27.585152  735111 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 13:37:27.810682  735111 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 13:37:28.082953  735111 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 13:37:28.083071  735111 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 13:37:28.258523  735111 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 13:37:28.367925  735111 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 13:37:28.814879  735111 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 13:37:28.932823  735111 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 13:37:29.004465  735111 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 13:37:29.004568  735111 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 13:37:29.007213  735111 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 13:37:29.009214  735111 out.go:235]   - Booting up control plane ...
	I0916 13:37:29.009358  735111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 13:37:29.009473  735111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 13:37:29.009582  735111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 13:37:29.024463  735111 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 13:37:29.030729  735111 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 13:37:29.030801  735111 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 13:37:29.175858  735111 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 13:37:29.176023  735111 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 13:37:29.693416  735111 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 518.039092ms
	I0916 13:37:29.693512  735111 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 13:37:38.879766  735111 kubeadm.go:310] [api-check] The API server is healthy after 9.191119687s
	I0916 13:37:38.891993  735111 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 13:37:38.907636  735111 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 13:37:38.947498  735111 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 13:37:38.947721  735111 kubeadm.go:310] [mark-control-plane] Marking the node ha-190751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 13:37:38.967620  735111 kubeadm.go:310] [bootstrap-token] Using token: 19lgif.tvhngrrmbtbid3dy
	I0916 13:37:38.968812  735111 out.go:235]   - Configuring RBAC rules ...
	I0916 13:37:38.968935  735111 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 13:37:38.976592  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 13:37:38.989031  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 13:37:38.993154  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 13:37:38.996206  735111 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 13:37:38.999675  735111 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 13:37:39.287523  735111 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 13:37:39.712013  735111 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 13:37:40.285980  735111 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 13:37:40.286947  735111 kubeadm.go:310] 
	I0916 13:37:40.287033  735111 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 13:37:40.287043  735111 kubeadm.go:310] 
	I0916 13:37:40.287161  735111 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 13:37:40.287172  735111 kubeadm.go:310] 
	I0916 13:37:40.287208  735111 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 13:37:40.287304  735111 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 13:37:40.287382  735111 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 13:37:40.287399  735111 kubeadm.go:310] 
	I0916 13:37:40.287443  735111 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 13:37:40.287449  735111 kubeadm.go:310] 
	I0916 13:37:40.287490  735111 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 13:37:40.287499  735111 kubeadm.go:310] 
	I0916 13:37:40.287563  735111 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 13:37:40.287651  735111 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 13:37:40.287711  735111 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 13:37:40.287717  735111 kubeadm.go:310] 
	I0916 13:37:40.287812  735111 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 13:37:40.287900  735111 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 13:37:40.287908  735111 kubeadm.go:310] 
	I0916 13:37:40.287998  735111 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 19lgif.tvhngrrmbtbid3dy \
	I0916 13:37:40.288167  735111 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 \
	I0916 13:37:40.288200  735111 kubeadm.go:310] 	--control-plane 
	I0916 13:37:40.288208  735111 kubeadm.go:310] 
	I0916 13:37:40.288337  735111 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 13:37:40.288347  735111 kubeadm.go:310] 
	I0916 13:37:40.288460  735111 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 19lgif.tvhngrrmbtbid3dy \
	I0916 13:37:40.288620  735111 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 
	I0916 13:37:40.289640  735111 kubeadm.go:310] W0916 13:37:26.183865     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 13:37:40.290030  735111 kubeadm.go:310] W0916 13:37:26.184708     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 13:37:40.290189  735111 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 13:37:40.290209  735111 cni.go:84] Creating CNI manager for ""
	I0916 13:37:40.290218  735111 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 13:37:40.291806  735111 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 13:37:40.292983  735111 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 13:37:40.298536  735111 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 13:37:40.298559  735111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 13:37:40.319523  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 13:37:40.756144  735111 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 13:37:40.756253  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 13:37:40.756269  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-190751 minikube.k8s.io/updated_at=2024_09_16T13_37_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86 minikube.k8s.io/name=ha-190751 minikube.k8s.io/primary=true
	I0916 13:37:40.783430  735111 ops.go:34] apiserver oom_adj: -16
	I0916 13:37:40.911968  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 13:37:41.412359  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 13:37:41.553914  735111 kubeadm.go:1113] duration metric: took 797.744629ms to wait for elevateKubeSystemPrivileges
	I0916 13:37:41.553952  735111 kubeadm.go:394] duration metric: took 15.577239114s to StartCluster
	I0916 13:37:41.553973  735111 settings.go:142] acquiring lock: {Name:mka9d51f09298db6ba9006267d9a91b0a28fad59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:41.554044  735111 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:37:41.554728  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/kubeconfig: {Name:mk84449075783d20927a7d708361081f8c4a2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:37:41.554924  735111 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:37:41.554947  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 13:37:41.554954  735111 start.go:241] waiting for startup goroutines ...
	I0916 13:37:41.554967  735111 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 13:37:41.555096  735111 addons.go:69] Setting storage-provisioner=true in profile "ha-190751"
	I0916 13:37:41.555117  735111 addons.go:234] Setting addon storage-provisioner=true in "ha-190751"
	I0916 13:37:41.555132  735111 addons.go:69] Setting default-storageclass=true in profile "ha-190751"
	I0916 13:37:41.555175  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:37:41.555179  735111 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-190751"
	I0916 13:37:41.555152  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:37:41.555715  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.555751  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.555720  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.555855  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.570781  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0916 13:37:41.570887  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0916 13:37:41.571275  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.571416  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.571837  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.571860  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.571954  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.571978  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.572205  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.572388  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.572391  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:41.573010  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.573062  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.574573  735111 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:37:41.574954  735111 kapi.go:59] client config for ha-190751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 13:37:41.575505  735111 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 13:37:41.575908  735111 addons.go:234] Setting addon default-storageclass=true in "ha-190751"
	I0916 13:37:41.575955  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:37:41.576336  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.576383  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.588809  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0916 13:37:41.589322  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.589856  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.589876  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.590208  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.590405  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:41.592142  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:41.594636  735111 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 13:37:41.595564  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0916 13:37:41.596015  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.596154  735111 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 13:37:41.596174  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 13:37:41.596195  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:41.596505  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.596526  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.596882  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.597396  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:41.597437  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:41.599683  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.600156  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:41.600231  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.600475  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:41.600656  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:41.600821  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:41.600952  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:41.613005  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0916 13:37:41.613496  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:41.614052  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:41.614078  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:41.614432  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:41.614650  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:37:41.616031  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:37:41.616275  735111 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 13:37:41.616293  735111 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 13:37:41.616314  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:37:41.619255  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.619735  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:37:41.619759  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:37:41.619892  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:37:41.619996  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:37:41.620134  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:37:41.620230  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:37:41.731350  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 13:37:41.743817  735111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 13:37:41.831801  735111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 13:37:42.102488  735111 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 13:37:42.297855  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.297879  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298186  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298208  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.298217  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.298225  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298266  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.298295  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298496  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298514  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.298587  735111 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 13:37:42.298606  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298617  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.298621  735111 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 13:37:42.298642  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.298679  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.298745  735111 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 13:37:42.298758  735111 round_trippers.go:469] Request Headers:
	I0916 13:37:42.298769  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:37:42.298783  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:37:42.298912  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.298926  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.309380  735111 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 13:37:42.309961  735111 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 13:37:42.309977  735111 round_trippers.go:469] Request Headers:
	I0916 13:37:42.309995  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:37:42.310002  735111 round_trippers.go:473]     Content-Type: application/json
	I0916 13:37:42.310007  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:37:42.312517  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:37:42.312659  735111 main.go:141] libmachine: Making call to close driver server
	I0916 13:37:42.312672  735111 main.go:141] libmachine: (ha-190751) Calling .Close
	I0916 13:37:42.312928  735111 main.go:141] libmachine: (ha-190751) DBG | Closing plugin on server side
	I0916 13:37:42.312987  735111 main.go:141] libmachine: Successfully made call to close driver server
	I0916 13:37:42.312999  735111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 13:37:42.314412  735111 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 13:37:42.315519  735111 addons.go:510] duration metric: took 760.558523ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 13:37:42.315551  735111 start.go:246] waiting for cluster config update ...
	I0916 13:37:42.315562  735111 start.go:255] writing updated cluster config ...
	I0916 13:37:42.316877  735111 out.go:201] 
	I0916 13:37:42.318103  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:37:42.318190  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:37:42.319697  735111 out.go:177] * Starting "ha-190751-m02" control-plane node in "ha-190751" cluster
	I0916 13:37:42.320749  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:37:42.320769  735111 cache.go:56] Caching tarball of preloaded images
	I0916 13:37:42.320856  735111 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:37:42.320868  735111 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:37:42.320948  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:37:42.321110  735111 start.go:360] acquireMachinesLock for ha-190751-m02: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:37:42.321159  735111 start.go:364] duration metric: took 30.332µs to acquireMachinesLock for "ha-190751-m02"
	I0916 13:37:42.321183  735111 start.go:93] Provisioning new machine with config: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:37:42.321267  735111 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 13:37:42.322661  735111 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 13:37:42.322741  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:37:42.322780  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:37:42.337055  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I0916 13:37:42.337532  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:37:42.338027  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:37:42.338044  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:37:42.338383  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:37:42.338609  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:37:42.338757  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:37:42.338913  735111 start.go:159] libmachine.API.Create for "ha-190751" (driver="kvm2")
	I0916 13:37:42.338943  735111 client.go:168] LocalClient.Create starting
	I0916 13:37:42.338970  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 13:37:42.339004  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:37:42.339021  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:37:42.339090  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 13:37:42.339114  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:37:42.339130  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:37:42.339155  735111 main.go:141] libmachine: Running pre-create checks...
	I0916 13:37:42.339165  735111 main.go:141] libmachine: (ha-190751-m02) Calling .PreCreateCheck
	I0916 13:37:42.339311  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetConfigRaw
	I0916 13:37:42.339700  735111 main.go:141] libmachine: Creating machine...
	I0916 13:37:42.339713  735111 main.go:141] libmachine: (ha-190751-m02) Calling .Create
	I0916 13:37:42.339867  735111 main.go:141] libmachine: (ha-190751-m02) Creating KVM machine...
	I0916 13:37:42.341059  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found existing default KVM network
	I0916 13:37:42.341247  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found existing private KVM network mk-ha-190751
	I0916 13:37:42.341384  735111 main.go:141] libmachine: (ha-190751-m02) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02 ...
	I0916 13:37:42.341417  735111 main.go:141] libmachine: (ha-190751-m02) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 13:37:42.341455  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.341364  735462 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:37:42.341541  735111 main.go:141] libmachine: (ha-190751-m02) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 13:37:42.605852  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.605728  735462 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa...
	I0916 13:37:42.679360  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.679197  735462 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/ha-190751-m02.rawdisk...
	I0916 13:37:42.679396  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Writing magic tar header
	I0916 13:37:42.679414  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Writing SSH key tar header
	I0916 13:37:42.679425  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:42.679313  735462 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02 ...
	I0916 13:37:42.679450  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02
	I0916 13:37:42.679459  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 13:37:42.679481  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:37:42.679495  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 13:37:42.679539  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02 (perms=drwx------)
	I0916 13:37:42.679575  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 13:37:42.679590  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 13:37:42.679605  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 13:37:42.679617  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 13:37:42.679632  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Checking permissions on dir: /home
	I0916 13:37:42.679655  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 13:37:42.679680  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Skipping /home - not owner
	I0916 13:37:42.679689  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 13:37:42.679704  735111 main.go:141] libmachine: (ha-190751-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 13:37:42.679714  735111 main.go:141] libmachine: (ha-190751-m02) Creating domain...
	I0916 13:37:42.680759  735111 main.go:141] libmachine: (ha-190751-m02) define libvirt domain using xml: 
	I0916 13:37:42.680791  735111 main.go:141] libmachine: (ha-190751-m02) <domain type='kvm'>
	I0916 13:37:42.680802  735111 main.go:141] libmachine: (ha-190751-m02)   <name>ha-190751-m02</name>
	I0916 13:37:42.680808  735111 main.go:141] libmachine: (ha-190751-m02)   <memory unit='MiB'>2200</memory>
	I0916 13:37:42.680816  735111 main.go:141] libmachine: (ha-190751-m02)   <vcpu>2</vcpu>
	I0916 13:37:42.680825  735111 main.go:141] libmachine: (ha-190751-m02)   <features>
	I0916 13:37:42.680833  735111 main.go:141] libmachine: (ha-190751-m02)     <acpi/>
	I0916 13:37:42.680842  735111 main.go:141] libmachine: (ha-190751-m02)     <apic/>
	I0916 13:37:42.680849  735111 main.go:141] libmachine: (ha-190751-m02)     <pae/>
	I0916 13:37:42.680857  735111 main.go:141] libmachine: (ha-190751-m02)     
	I0916 13:37:42.680864  735111 main.go:141] libmachine: (ha-190751-m02)   </features>
	I0916 13:37:42.680878  735111 main.go:141] libmachine: (ha-190751-m02)   <cpu mode='host-passthrough'>
	I0916 13:37:42.680888  735111 main.go:141] libmachine: (ha-190751-m02)   
	I0916 13:37:42.680897  735111 main.go:141] libmachine: (ha-190751-m02)   </cpu>
	I0916 13:37:42.680904  735111 main.go:141] libmachine: (ha-190751-m02)   <os>
	I0916 13:37:42.680912  735111 main.go:141] libmachine: (ha-190751-m02)     <type>hvm</type>
	I0916 13:37:42.680919  735111 main.go:141] libmachine: (ha-190751-m02)     <boot dev='cdrom'/>
	I0916 13:37:42.680928  735111 main.go:141] libmachine: (ha-190751-m02)     <boot dev='hd'/>
	I0916 13:37:42.680936  735111 main.go:141] libmachine: (ha-190751-m02)     <bootmenu enable='no'/>
	I0916 13:37:42.680947  735111 main.go:141] libmachine: (ha-190751-m02)   </os>
	I0916 13:37:42.680974  735111 main.go:141] libmachine: (ha-190751-m02)   <devices>
	I0916 13:37:42.680993  735111 main.go:141] libmachine: (ha-190751-m02)     <disk type='file' device='cdrom'>
	I0916 13:37:42.681005  735111 main.go:141] libmachine: (ha-190751-m02)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/boot2docker.iso'/>
	I0916 13:37:42.681015  735111 main.go:141] libmachine: (ha-190751-m02)       <target dev='hdc' bus='scsi'/>
	I0916 13:37:42.681025  735111 main.go:141] libmachine: (ha-190751-m02)       <readonly/>
	I0916 13:37:42.681039  735111 main.go:141] libmachine: (ha-190751-m02)     </disk>
	I0916 13:37:42.681049  735111 main.go:141] libmachine: (ha-190751-m02)     <disk type='file' device='disk'>
	I0916 13:37:42.681061  735111 main.go:141] libmachine: (ha-190751-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 13:37:42.681074  735111 main.go:141] libmachine: (ha-190751-m02)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/ha-190751-m02.rawdisk'/>
	I0916 13:37:42.681084  735111 main.go:141] libmachine: (ha-190751-m02)       <target dev='hda' bus='virtio'/>
	I0916 13:37:42.681092  735111 main.go:141] libmachine: (ha-190751-m02)     </disk>
	I0916 13:37:42.681101  735111 main.go:141] libmachine: (ha-190751-m02)     <interface type='network'>
	I0916 13:37:42.681107  735111 main.go:141] libmachine: (ha-190751-m02)       <source network='mk-ha-190751'/>
	I0916 13:37:42.681113  735111 main.go:141] libmachine: (ha-190751-m02)       <model type='virtio'/>
	I0916 13:37:42.681119  735111 main.go:141] libmachine: (ha-190751-m02)     </interface>
	I0916 13:37:42.681128  735111 main.go:141] libmachine: (ha-190751-m02)     <interface type='network'>
	I0916 13:37:42.681156  735111 main.go:141] libmachine: (ha-190751-m02)       <source network='default'/>
	I0916 13:37:42.681171  735111 main.go:141] libmachine: (ha-190751-m02)       <model type='virtio'/>
	I0916 13:37:42.681184  735111 main.go:141] libmachine: (ha-190751-m02)     </interface>
	I0916 13:37:42.681193  735111 main.go:141] libmachine: (ha-190751-m02)     <serial type='pty'>
	I0916 13:37:42.681201  735111 main.go:141] libmachine: (ha-190751-m02)       <target port='0'/>
	I0916 13:37:42.681211  735111 main.go:141] libmachine: (ha-190751-m02)     </serial>
	I0916 13:37:42.681219  735111 main.go:141] libmachine: (ha-190751-m02)     <console type='pty'>
	I0916 13:37:42.681229  735111 main.go:141] libmachine: (ha-190751-m02)       <target type='serial' port='0'/>
	I0916 13:37:42.681262  735111 main.go:141] libmachine: (ha-190751-m02)     </console>
	I0916 13:37:42.681289  735111 main.go:141] libmachine: (ha-190751-m02)     <rng model='virtio'>
	I0916 13:37:42.681304  735111 main.go:141] libmachine: (ha-190751-m02)       <backend model='random'>/dev/random</backend>
	I0916 13:37:42.681317  735111 main.go:141] libmachine: (ha-190751-m02)     </rng>
	I0916 13:37:42.681334  735111 main.go:141] libmachine: (ha-190751-m02)     
	I0916 13:37:42.681345  735111 main.go:141] libmachine: (ha-190751-m02)     
	I0916 13:37:42.681360  735111 main.go:141] libmachine: (ha-190751-m02)   </devices>
	I0916 13:37:42.681369  735111 main.go:141] libmachine: (ha-190751-m02) </domain>
	I0916 13:37:42.681380  735111 main.go:141] libmachine: (ha-190751-m02) 
	I0916 13:37:42.688231  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:1f:6c:3b in network default
	I0916 13:37:42.689057  735111 main.go:141] libmachine: (ha-190751-m02) Ensuring networks are active...
	I0916 13:37:42.689085  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:42.689818  735111 main.go:141] libmachine: (ha-190751-m02) Ensuring network default is active
	I0916 13:37:42.690179  735111 main.go:141] libmachine: (ha-190751-m02) Ensuring network mk-ha-190751 is active
	I0916 13:37:42.690645  735111 main.go:141] libmachine: (ha-190751-m02) Getting domain xml...
	I0916 13:37:42.691437  735111 main.go:141] libmachine: (ha-190751-m02) Creating domain...
	I0916 13:37:43.942323  735111 main.go:141] libmachine: (ha-190751-m02) Waiting to get IP...
	I0916 13:37:43.943256  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:43.943656  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:43.943679  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:43.943635  735462 retry.go:31] will retry after 295.084615ms: waiting for machine to come up
	I0916 13:37:44.240016  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:44.240562  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:44.240586  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:44.240509  735462 retry.go:31] will retry after 383.461675ms: waiting for machine to come up
	I0916 13:37:44.626046  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:44.626530  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:44.626563  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:44.626470  735462 retry.go:31] will retry after 438.005593ms: waiting for machine to come up
	I0916 13:37:45.066175  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:45.066684  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:45.066718  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:45.066618  735462 retry.go:31] will retry after 459.760025ms: waiting for machine to come up
	I0916 13:37:45.527795  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:45.528205  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:45.528228  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:45.528177  735462 retry.go:31] will retry after 749.840232ms: waiting for machine to come up
	I0916 13:37:46.279851  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:46.280287  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:46.280315  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:46.280234  735462 retry.go:31] will retry after 717.950644ms: waiting for machine to come up
	I0916 13:37:47.000301  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:47.000697  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:47.000721  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:47.000641  735462 retry.go:31] will retry after 1.10090672s: waiting for machine to come up
	I0916 13:37:48.102653  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:48.102982  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:48.103004  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:48.102932  735462 retry.go:31] will retry after 1.357065606s: waiting for machine to come up
	I0916 13:37:49.461205  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:49.461635  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:49.461685  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:49.461593  735462 retry.go:31] will retry after 1.820123754s: waiting for machine to come up
	I0916 13:37:51.284728  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:51.285283  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:51.285313  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:51.285227  735462 retry.go:31] will retry after 1.535295897s: waiting for machine to come up
	I0916 13:37:52.821910  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:52.822436  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:52.822464  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:52.822416  735462 retry.go:31] will retry after 2.276365416s: waiting for machine to come up
	I0916 13:37:55.101849  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:55.102243  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:55.102271  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:55.102193  735462 retry.go:31] will retry after 2.597037824s: waiting for machine to come up
	I0916 13:37:57.701131  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:37:57.701738  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:37:57.701763  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:37:57.701687  735462 retry.go:31] will retry after 3.553511192s: waiting for machine to come up
	I0916 13:38:01.259301  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:01.259684  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find current IP address of domain ha-190751-m02 in network mk-ha-190751
	I0916 13:38:01.259715  735111 main.go:141] libmachine: (ha-190751-m02) DBG | I0916 13:38:01.259645  735462 retry.go:31] will retry after 3.46552714s: waiting for machine to come up
	I0916 13:38:04.728155  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.728583  735111 main.go:141] libmachine: (ha-190751-m02) Found IP for machine: 192.168.39.192
	I0916 13:38:04.728609  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has current primary IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.728617  735111 main.go:141] libmachine: (ha-190751-m02) Reserving static IP address...
	I0916 13:38:04.729005  735111 main.go:141] libmachine: (ha-190751-m02) DBG | unable to find host DHCP lease matching {name: "ha-190751-m02", mac: "52:54:00:41:52:c1", ip: "192.168.39.192"} in network mk-ha-190751
	I0916 13:38:04.800262  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Getting to WaitForSSH function...
	I0916 13:38:04.800290  735111 main.go:141] libmachine: (ha-190751-m02) Reserved static IP address: 192.168.39.192
	I0916 13:38:04.800302  735111 main.go:141] libmachine: (ha-190751-m02) Waiting for SSH to be available...
	I0916 13:38:04.803047  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.803493  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:04.803526  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.803734  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Using SSH client type: external
	I0916 13:38:04.803780  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa (-rw-------)
	I0916 13:38:04.803812  735111 main.go:141] libmachine: (ha-190751-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:38:04.803824  735111 main.go:141] libmachine: (ha-190751-m02) DBG | About to run SSH command:
	I0916 13:38:04.803871  735111 main.go:141] libmachine: (ha-190751-m02) DBG | exit 0
	I0916 13:38:04.925602  735111 main.go:141] libmachine: (ha-190751-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 13:38:04.925905  735111 main.go:141] libmachine: (ha-190751-m02) KVM machine creation complete!
	I0916 13:38:04.926193  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetConfigRaw
	I0916 13:38:04.926774  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:04.926972  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:04.927113  735111 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 13:38:04.927130  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:38:04.928234  735111 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 13:38:04.928251  735111 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 13:38:04.928259  735111 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 13:38:04.928267  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:04.930468  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.930807  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:04.930844  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:04.930986  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:04.931135  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:04.931283  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:04.931393  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:04.931559  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:04.931790  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:04.931805  735111 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 13:38:05.032794  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:38:05.032819  735111 main.go:141] libmachine: Detecting the provisioner...
	I0916 13:38:05.032830  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.035714  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.036055  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.036083  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.036200  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.036385  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.036548  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.036685  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.036859  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.037049  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.037060  735111 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 13:38:05.137958  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 13:38:05.138060  735111 main.go:141] libmachine: found compatible host: buildroot
	I0916 13:38:05.138074  735111 main.go:141] libmachine: Provisioning with buildroot...
	I0916 13:38:05.138088  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:38:05.138310  735111 buildroot.go:166] provisioning hostname "ha-190751-m02"
	I0916 13:38:05.138334  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:38:05.138539  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.140899  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.141226  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.141244  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.141396  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.141566  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.141738  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.141881  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.142038  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.142199  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.142210  735111 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751-m02 && echo "ha-190751-m02" | sudo tee /etc/hostname
	I0916 13:38:05.259526  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751-m02
	
	I0916 13:38:05.259556  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.262559  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.262928  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.262955  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.263147  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.263355  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.263516  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.263659  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.263848  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.264041  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.264058  735111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:38:05.373840  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
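	For reference, not from this run: a quick way to confirm what the hostname commands above left behind on the guest (names taken from the log) would be:
	  $ cat /etc/hostname                  # expect: ha-190751-m02
	  $ grep -n '127.0.1.1' /etc/hosts     # expect the ha-190751-m02 entry written by the script above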
	I0916 13:38:05.373870  735111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:38:05.373890  735111 buildroot.go:174] setting up certificates
	I0916 13:38:05.373901  735111 provision.go:84] configureAuth start
	I0916 13:38:05.373914  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetMachineName
	I0916 13:38:05.374195  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:05.377605  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.377980  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.378007  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.378166  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.380495  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.380835  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.380864  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.381025  735111 provision.go:143] copyHostCerts
	I0916 13:38:05.381056  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:38:05.381083  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:38:05.381092  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:38:05.381156  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:38:05.381241  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:38:05.381259  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:38:05.381263  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:38:05.381289  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:38:05.381346  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:38:05.381363  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:38:05.381369  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:38:05.381391  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:38:05.381452  735111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751-m02 san=[127.0.0.1 192.168.39.192 ha-190751-m02 localhost minikube]
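	For reference, not from this run: the SANs requested above (127.0.0.1, 192.168.39.192, ha-190751-m02, localhost, minikube) can be double-checked in the generated server.pem with plain openssl, using the path from the log line:
	  $ openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'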
	I0916 13:38:05.637241  735111 provision.go:177] copyRemoteCerts
	I0916 13:38:05.637298  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:38:05.637322  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.639811  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.640189  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.640221  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.640337  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.640528  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.640702  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.640863  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:05.723650  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:38:05.723719  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 13:38:05.750479  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:38:05.750550  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 13:38:05.773752  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:38:05.773855  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:38:05.796174  735111 provision.go:87] duration metric: took 422.260451ms to configureAuth
	I0916 13:38:05.796199  735111 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:38:05.796381  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:05.796473  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:05.798924  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.799224  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:05.799253  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:05.799446  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:05.799646  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.799813  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:05.799976  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:05.800123  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:05.800291  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:05.800306  735111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:38:06.020208  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
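	For reference, not from this run: whether that drop-in actually reaches crio depends on the guest's crio.service sourcing /etc/sysconfig/crio.minikube, which is an assumption here; a hedged check on the node would be:
	  $ cat /etc/sysconfig/crio.minikube                    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  $ systemctl show -p EnvironmentFiles,ExecStart crio   # confirm the unit references that file (assumption about the buildroot image)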
	I0916 13:38:06.020242  735111 main.go:141] libmachine: Checking connection to Docker...
	I0916 13:38:06.020252  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetURL
	I0916 13:38:06.021653  735111 main.go:141] libmachine: (ha-190751-m02) DBG | Using libvirt version 6000000
	I0916 13:38:06.024072  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.024436  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.024466  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.024576  735111 main.go:141] libmachine: Docker is up and running!
	I0916 13:38:06.024590  735111 main.go:141] libmachine: Reticulating splines...
	I0916 13:38:06.024599  735111 client.go:171] duration metric: took 23.685647791s to LocalClient.Create
	I0916 13:38:06.024624  735111 start.go:167] duration metric: took 23.685713191s to libmachine.API.Create "ha-190751"
	I0916 13:38:06.024636  735111 start.go:293] postStartSetup for "ha-190751-m02" (driver="kvm2")
	I0916 13:38:06.024648  735111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:38:06.024674  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.024937  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:38:06.024957  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:06.026882  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.027186  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.027211  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.027329  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.027492  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.027649  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.027787  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:06.107825  735111 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:38:06.112226  735111 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:38:06.112253  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:38:06.112340  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:38:06.112437  735111 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:38:06.112449  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:38:06.112528  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:38:06.121914  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:38:06.145214  735111 start.go:296] duration metric: took 120.567037ms for postStartSetup
	I0916 13:38:06.145254  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetConfigRaw
	I0916 13:38:06.145854  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:06.148213  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.148585  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.148613  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.148814  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:38:06.149003  735111 start.go:128] duration metric: took 23.827724525s to createHost
	I0916 13:38:06.149027  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:06.151115  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.151449  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.151485  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.151581  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.151739  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.151861  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.151984  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.152149  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:38:06.152361  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0916 13:38:06.152376  735111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:38:06.254031  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726493886.213043192
	
	I0916 13:38:06.254059  735111 fix.go:216] guest clock: 1726493886.213043192
	I0916 13:38:06.254069  735111 fix.go:229] Guest: 2024-09-16 13:38:06.213043192 +0000 UTC Remote: 2024-09-16 13:38:06.149015328 +0000 UTC m=+69.504919332 (delta=64.027864ms)
	I0916 13:38:06.254094  735111 fix.go:200] guest clock delta is within tolerance: 64.027864ms
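	The delta above is simply guest minus host on the fractional seconds, which can be reproduced with a one-liner (illustrative only):
	  $ awk 'BEGIN{printf "%.9f s\n", 6.213043192 - 6.149015328}'   # 0.064027864 s, matching the 64.027864ms in the log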
	I0916 13:38:06.254103  735111 start.go:83] releasing machines lock for "ha-190751-m02", held for 23.932931473s
	I0916 13:38:06.254131  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.254359  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:06.256826  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.257114  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.257145  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.259449  735111 out.go:177] * Found network options:
	I0916 13:38:06.260782  735111 out.go:177]   - NO_PROXY=192.168.39.94
	W0916 13:38:06.261938  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:38:06.261970  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.262427  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.262614  735111 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:38:06.262735  735111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:38:06.262778  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	W0916 13:38:06.262835  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:38:06.262925  735111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:38:06.262946  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:38:06.265374  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.265773  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.265797  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.265852  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.265915  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.266074  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.266214  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.266309  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:06.266322  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:06.266330  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:06.266465  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:38:06.266569  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:38:06.266688  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:38:06.266825  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:38:06.504116  735111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:38:06.509809  735111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:38:06.509877  735111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:38:06.527632  735111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 13:38:06.527657  735111 start.go:495] detecting cgroup driver to use...
	I0916 13:38:06.527782  735111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:38:06.544086  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:38:06.557351  735111 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:38:06.557400  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:38:06.570277  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:38:06.583266  735111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:38:06.703947  735111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:38:06.860845  735111 docker.go:233] disabling docker service ...
	I0916 13:38:06.860920  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:38:06.884863  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:38:06.897537  735111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:38:07.025766  735111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:38:07.141630  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:38:07.155310  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:38:07.173092  735111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:38:07.173165  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.183550  735111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:38:07.183607  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.193383  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.203087  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.214974  735111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:38:07.225114  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.234675  735111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.252702  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:38:07.262650  735111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:38:07.271745  735111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 13:38:07.271787  735111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 13:38:07.284119  735111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 13:38:07.293938  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:38:07.404511  735111 ssh_runner.go:195] Run: sudo systemctl restart crio
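	For reference, not from this run: the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the keys reconstructed below (values taken from the commands in the log, not captured from the VM):
	  $ grep -E -A1 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	  # reconstructed expectation:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",
	  #   ]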
	I0916 13:38:07.493651  735111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:38:07.493733  735111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:38:07.498368  735111 start.go:563] Will wait 60s for crictl version
	I0916 13:38:07.498416  735111 ssh_runner.go:195] Run: which crictl
	I0916 13:38:07.501982  735111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:38:07.540227  735111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:38:07.540325  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:38:07.567997  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:38:07.597231  735111 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:38:07.598490  735111 out.go:177]   - env NO_PROXY=192.168.39.94
	I0916 13:38:07.599534  735111 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:38:07.602146  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:07.602513  735111 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:56 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:38:07.602537  735111 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:38:07.602694  735111 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:38:07.606644  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:38:07.619430  735111 mustload.go:65] Loading cluster: ha-190751
	I0916 13:38:07.619642  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:07.619896  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:07.619936  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:07.634458  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35067
	I0916 13:38:07.634853  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:07.635286  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:07.635307  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:07.635623  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:07.635817  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:38:07.637120  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:38:07.637408  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:07.637440  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:07.651391  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0916 13:38:07.651748  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:07.652159  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:07.652180  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:07.652503  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:07.652658  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:38:07.652807  735111 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.192
	I0916 13:38:07.652823  735111 certs.go:194] generating shared ca certs ...
	I0916 13:38:07.652839  735111 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:38:07.652987  735111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:38:07.653037  735111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:38:07.653049  735111 certs.go:256] generating profile certs ...
	I0916 13:38:07.653138  735111 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:38:07.653170  735111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412
	I0916 13:38:07.653190  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.192 192.168.39.254]
	I0916 13:38:07.764013  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412 ...
	I0916 13:38:07.764044  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412: {Name:mk58560f2a84b27105eff3bc12cf91cf12104359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:38:07.764267  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412 ...
	I0916 13:38:07.764285  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412: {Name:mk657f19070c49dca56345e0ae2a1dcf27308040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:38:07.764391  735111 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.8feb7412 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:38:07.764569  735111 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.8feb7412 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
	I0916 13:38:07.764766  735111 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:38:07.764785  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:38:07.764804  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:38:07.764831  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:38:07.764848  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:38:07.764865  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:38:07.764879  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:38:07.764896  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:38:07.764913  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:38:07.764992  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:38:07.765036  735111 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:38:07.765050  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:38:07.765080  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:38:07.765113  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:38:07.765145  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:38:07.765197  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:38:07.765232  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:38:07.765253  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:07.765271  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:38:07.765309  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:38:07.767870  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:07.768261  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:38:07.768284  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:07.768510  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:38:07.768700  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:38:07.768842  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:38:07.768975  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:38:07.849931  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 13:38:07.855030  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 13:38:07.866970  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 13:38:07.871340  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 13:38:07.883132  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 13:38:07.887581  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 13:38:07.898269  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 13:38:07.902673  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 13:38:07.913972  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 13:38:07.918388  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 13:38:07.928944  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 13:38:07.933508  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 13:38:07.943498  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:38:07.968310  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:38:07.991824  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:38:08.014029  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:38:08.036224  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 13:38:08.058343  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 13:38:08.080985  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:38:08.103508  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:38:08.125691  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:38:08.148890  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:38:08.170558  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:38:08.192449  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 13:38:08.208626  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 13:38:08.227317  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 13:38:08.246057  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 13:38:08.262149  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 13:38:08.277743  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 13:38:08.294944  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 13:38:08.310828  735111 ssh_runner.go:195] Run: openssl version
	I0916 13:38:08.316330  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:38:08.326533  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:08.330848  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:08.330904  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:38:08.336356  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:38:08.346444  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:38:08.356609  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:38:08.360738  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:38:08.360786  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:38:08.366029  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 13:38:08.376215  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:38:08.386857  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:38:08.391761  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:38:08.391820  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:38:08.397361  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
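	For reference, not from this run: the .0 symlink names created above are the openssl subject hashes of each certificate, so they can be re-derived at any time:
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, hence /etc/ssl/certs/b5213941.0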
	I0916 13:38:08.409079  735111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:38:08.413300  735111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 13:38:08.413358  735111 kubeadm.go:934] updating node {m02 192.168.39.192 8443 v1.31.1 crio true true} ...
	I0916 13:38:08.413457  735111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 13:38:08.413482  735111 kube-vip.go:115] generating kube-vip config ...
	I0916 13:38:08.413511  735111 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:38:08.431179  735111 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:38:08.431241  735111 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
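	For reference, not from this run: once this manifest is running, kube-vip should place the HA VIP 192.168.39.254 on eth0 (per the vip_interface, vip_cidr and address values above), which can be checked on whichever control-plane node holds the lease with:
	  $ ip addr show dev eth0 | grep 192.168.39.254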
	I0916 13:38:08.431287  735111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:38:08.441183  735111 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 13:38:08.441223  735111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 13:38:08.450679  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 13:38:08.450701  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:38:08.450754  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:38:08.450842  735111 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 13:38:08.450894  735111 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet
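	The ?checksum=file:...sha256 suffix on the URLs above is how the downloader is told to verify each binary against its published SHA-256; doing the equivalent by hand with the standard dl.k8s.io layout would look like:
	  $ curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
	  $ curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	  $ echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # expect: kubelet: OK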
	I0916 13:38:08.454948  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 13:38:08.454974  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 13:38:09.088897  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:38:09.089006  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:38:09.093915  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 13:38:09.093953  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 13:38:09.262028  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:38:09.298220  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:38:09.298340  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:38:09.305048  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 13:38:09.305086  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
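
The kubectl, kubeadm and kubelet binaries above are fetched from dl.k8s.io, verified against the published .sha256 files referenced in the ?checksum= URLs, and copied into /var/lib/minikube/binaries/v1.31.1 on the guest. A self-contained sketch of the download-and-verify step using only the Go standard library follows; the fetch helper is a hypothetical name, not minikube's download package.

// Sketch of "download and verify against the published .sha256" for one binary.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url and returns its body; the name is illustrative.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	// The .sha256 file holds the hex digest (possibly followed by a filename).
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])

	if got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubectl verified:", len(bin), "bytes")
}
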
	I0916 13:38:09.689691  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 13:38:09.699624  735111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 13:38:09.715725  735111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:38:09.733713  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 13:38:09.751995  735111 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:38:09.755951  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
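
The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: any stale line for that hostname is filtered out before the current VIP entry is appended and the file is copied back with sudo. An illustrative Go equivalent of the filter-and-append is sketched here; it prints the updated content instead of writing through sudo, and upsertHostsEntry is a made-up helper name.

// Sketch of an idempotent /etc/hosts update: drop old entries for the
// hostname, then append the current IP mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		// Skip stale tab-separated entries for the same hostname.
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHostsEntry(string(data), "192.168.39.254", "control-plane.minikube.internal"))
}
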
	I0916 13:38:09.768309  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:38:09.903306  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:38:09.921100  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:38:09.921542  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:09.921603  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:09.937177  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0916 13:38:09.937561  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:09.938063  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:09.938092  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:09.938518  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:09.938725  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:38:09.938876  735111 start.go:317] joinCluster: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:38:09.938973  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 13:38:09.938988  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:38:09.942383  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:09.942918  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:38:09.942952  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:38:09.943199  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:38:09.943406  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:38:09.943587  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:38:09.943737  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:38:10.088194  735111 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:38:10.088240  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whudzs.boc3qvd5sgl21n61 --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m02 --control-plane --apiserver-advertise-address=192.168.39.192 --apiserver-bind-port=8443"
	I0916 13:38:31.686672  735111 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whudzs.boc3qvd5sgl21n61 --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m02 --control-plane --apiserver-advertise-address=192.168.39.192 --apiserver-bind-port=8443": (21.59840385s)
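
The join above adds m02 as a second control-plane member: kubeadm uses the bootstrap token to pull the cluster CA, validates it against the discovery hash, and then brings up local etcd, apiserver, controller-manager and scheduler instances behind the advertised address. A hedged sketch of issuing such a command with os/exec follows; minikube actually runs it on the target node over SSH, and the token and hash here are placeholders, not values to reuse.

// Sketch of invoking "kubeadm join ... --control-plane" from Go.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"join", "control-plane.minikube.internal:8443",
		"--token", "<redacted>",
		"--discovery-token-ca-cert-hash", "sha256:<redacted>",
		"--control-plane",
		"--apiserver-advertise-address", "192.168.39.192",
		"--apiserver-bind-port", "8443",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", "ha-190751-m02",
		"--ignore-preflight-errors=all",
	}
	// CombinedOutput captures the same kind of output the test log records.
	out, err := exec.Command("kubeadm", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("join failed:", err)
	}
}
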
	I0916 13:38:31.686721  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 13:38:32.210939  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-190751-m02 minikube.k8s.io/updated_at=2024_09_16T13_38_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86 minikube.k8s.io/name=ha-190751 minikube.k8s.io/primary=false
	I0916 13:38:32.330736  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-190751-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 13:38:32.473220  735111 start.go:319] duration metric: took 22.53433791s to joinCluster
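
After the join, the new node is labeled with minikube metadata and the control-plane NoSchedule taint is removed so it can also run workloads (ControlPlane:true Worker:true). An illustrative local-kubectl version of those two calls is sketched below; minikube runs its bundled kubectl with --kubeconfig over SSH, and the timestamp format simply mirrors the log.

// Sketch of the post-join label and untaint calls.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	node := "ha-190751-m02"

	// Overwrite-style labels, mirroring the minikube.k8s.io/* keys in the log.
	_ = run("label", "--overwrite", "nodes", node,
		"minikube.k8s.io/updated_at="+time.Now().Format("2006_01_02T15_04_05_0700"),
		"minikube.k8s.io/version=v1.34.0",
		"minikube.k8s.io/name=ha-190751",
		"minikube.k8s.io/primary=false")

	// The trailing "-" removes the taint rather than adding it.
	_ = run("taint", "nodes", node, "node-role.kubernetes.io/control-plane:NoSchedule-")
}
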
	I0916 13:38:32.473301  735111 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:38:32.473638  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:32.475796  735111 out.go:177] * Verifying Kubernetes components...
	I0916 13:38:32.477071  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:38:32.708074  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:38:32.732989  735111 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:38:32.733289  735111 kapi.go:59] client config for ha-190751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 13:38:32.733358  735111 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.94:8443
	I0916 13:38:32.733654  735111 node_ready.go:35] waiting up to 6m0s for node "ha-190751-m02" to be "Ready" ...
	I0916 13:38:32.733792  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:32.733802  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:32.733816  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:32.733821  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:32.743487  735111 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 13:38:33.234052  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:33.234084  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:33.234096  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:33.234101  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:33.248083  735111 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0916 13:38:33.733904  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:33.733929  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:33.733942  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:33.733947  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:33.738779  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:34.234664  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:34.234686  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:34.234693  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:34.234698  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:34.239999  735111 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 13:38:34.734843  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:34.734865  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:34.734877  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:34.734880  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:34.738691  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:34.739212  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:35.234902  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:35.234925  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:35.234933  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:35.234937  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:35.248275  735111 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0916 13:38:35.733866  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:35.733890  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:35.733899  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:35.733903  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:35.737774  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:36.234952  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:36.234978  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:36.234987  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:36.234991  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:36.239485  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:36.733892  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:36.733924  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:36.733935  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:36.733942  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:36.737219  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:37.234760  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:37.234784  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:37.234793  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:37.234797  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:37.237476  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:37.238039  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:37.734751  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:37.734776  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:37.734787  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:37.734793  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:37.737512  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:38.234526  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:38.234555  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:38.234566  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:38.234571  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:38.237472  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:38.734671  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:38.734693  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:38.734701  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:38.734704  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:38.738203  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:39.233903  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:39.233930  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:39.233939  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:39.233945  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:39.238849  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:39.239407  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:39.734899  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:39.734925  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:39.734934  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:39.734939  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:39.737985  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:40.234645  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:40.234672  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:40.234681  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:40.234685  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:40.239039  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:40.734018  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:40.734050  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:40.734062  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:40.734067  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:40.737361  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:41.234709  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:41.234731  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:41.234738  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:41.234742  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:41.238698  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:41.734406  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:41.734430  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:41.734441  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:41.734447  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:41.737719  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:41.738587  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:42.234046  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:42.234072  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:42.234090  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:42.234096  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:42.237631  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:42.734809  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:42.734833  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:42.734841  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:42.734846  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:42.738196  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:43.234205  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:43.234231  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:43.234241  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:43.234245  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:43.238473  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:43.734653  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:43.734681  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:43.734693  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:43.734700  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:43.737734  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:44.234881  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:44.234907  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:44.234923  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:44.234930  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:44.237991  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:44.238553  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:44.733911  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:44.733933  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:44.733941  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:44.733945  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:44.736682  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:45.233969  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:45.233992  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:45.234000  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:45.234005  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:45.237902  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:45.734865  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:45.734888  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:45.734899  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:45.734902  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:45.738198  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:46.233935  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:46.233961  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:46.233972  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:46.233979  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:46.237819  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:46.238605  735111 node_ready.go:53] node "ha-190751-m02" has status "Ready":"False"
	I0916 13:38:46.733950  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:46.733974  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:46.733987  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:46.733995  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:46.737023  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:47.234426  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:47.234450  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.234458  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.234461  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.237977  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:47.238653  735111 node_ready.go:49] node "ha-190751-m02" has status "Ready":"True"
	I0916 13:38:47.238672  735111 node_ready.go:38] duration metric: took 14.50498186s for node "ha-190751-m02" to be "Ready" ...
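
The roughly 500ms polling loop above issues plain GETs against the node object until its "Ready" condition flips to "True". Below is a minimal standard-library sketch of the same loop; it assumes an already-authenticated *http.Client (for example one built from the client.crt/client.key paths in the kapi.go line earlier), and nodeReady/waitNodeReady are illustrative names, not minikube functions.

// Poll a node's Ready condition via the Kubernetes API until it is True.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(c *http.Client, apiServer, name string) (bool, error) {
	resp, err := c.Get(apiServer + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func waitNodeReady(c *http.Client, apiServer, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready, err := nodeReady(c, apiServer, name); err == nil && ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready after %s", name, timeout)
}

func main() {
	// Placeholder client; a real one needs the cluster's TLS client certificates.
	err := waitNodeReady(http.DefaultClient, "https://192.168.39.94:8443", "ha-190751-m02", 6*time.Minute)
	fmt.Println(err)
}
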
	I0916 13:38:47.238681  735111 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 13:38:47.238758  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:47.238770  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.238779  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.238781  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.241850  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:47.249481  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.249553  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw8n
	I0916 13:38:47.249562  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.249571  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.249575  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.251850  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.252467  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.252484  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.252493  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.252500  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.254527  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.254963  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.254978  735111 pod_ready.go:82] duration metric: took 5.476574ms for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.254986  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.255032  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gzkpj
	I0916 13:38:47.255039  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.255047  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.255049  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.256840  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.257430  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.257444  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.257451  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.257455  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.259455  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.260052  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.260066  735111 pod_ready.go:82] duration metric: took 5.074604ms for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.260075  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.260116  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751
	I0916 13:38:47.260124  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.260130  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.260134  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.262250  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.262686  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.262699  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.262706  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.262710  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.264543  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.264871  735111 pod_ready.go:93] pod "etcd-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.264885  735111 pod_ready.go:82] duration metric: took 4.80542ms for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.264893  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.264930  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m02
	I0916 13:38:47.264937  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.264943  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.264946  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.266896  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.267650  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:47.267664  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.267671  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.267676  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.269655  735111 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 13:38:47.270430  735111 pod_ready.go:93] pod "etcd-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.270447  735111 pod_ready.go:82] duration metric: took 5.54867ms for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.270464  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.434908  735111 request.go:632] Waited for 164.351719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:38:47.434966  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:38:47.434972  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.434979  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.434982  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.437981  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.635096  735111 request.go:632] Waited for 196.347109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.635183  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:47.635190  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.635200  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.635209  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.637835  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:47.638549  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:47.638573  735111 pod_ready.go:82] duration metric: took 368.102477ms for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.638583  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:47.835392  735111 request.go:632] Waited for 196.733194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:38:47.835483  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:38:47.835488  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:47.835496  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:47.835500  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:47.838587  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.034836  735111 request.go:632] Waited for 195.365767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.034892  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.034897  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.034904  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.034909  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.037912  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:48.038587  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:48.038604  735111 pod_ready.go:82] duration metric: took 400.01422ms for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.038612  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.234735  735111 request.go:632] Waited for 196.056514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:38:48.234801  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:38:48.234806  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.234813  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.234817  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.237710  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:48.434847  735111 request.go:632] Waited for 196.364736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:48.434931  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:48.434937  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.434945  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.434949  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.438033  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.438805  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:48.438826  735111 pod_ready.go:82] duration metric: took 400.207153ms for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.438836  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.634856  735111 request.go:632] Waited for 195.950058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:38:48.634915  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:38:48.634922  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.634930  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.634934  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.638002  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.835350  735111 request.go:632] Waited for 196.358659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.835415  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:48.835421  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:48.835427  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:48.835431  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:48.838502  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:48.839040  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:48.839057  735111 pod_ready.go:82] duration metric: took 400.214991ms for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:48.839066  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.035145  735111 request.go:632] Waited for 195.967255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:38:49.035205  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:38:49.035211  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.035219  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.035224  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.038680  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:49.234891  735111 request.go:632] Waited for 195.359474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:49.234967  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:49.234972  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.234980  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.234984  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.238513  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:49.239095  735111 pod_ready.go:93] pod "kube-proxy-24q9n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:49.239112  735111 pod_ready.go:82] duration metric: took 400.039577ms for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.239121  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.435296  735111 request.go:632] Waited for 196.076536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:38:49.435369  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:38:49.435377  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.435391  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.435400  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.438652  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:49.634610  735111 request.go:632] Waited for 195.295347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:49.634669  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:49.634674  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.634682  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.634685  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.637513  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:49.637928  735111 pod_ready.go:93] pod "kube-proxy-9d7kt" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:49.637947  735111 pod_ready.go:82] duration metric: took 398.820171ms for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.637955  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:49.835050  735111 request.go:632] Waited for 197.017122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:38:49.835113  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:38:49.835118  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:49.835126  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:49.835131  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:49.837981  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:50.034991  735111 request.go:632] Waited for 196.406773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:50.035048  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:38:50.035053  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.035059  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.035063  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.038370  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:50.038828  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:50.038845  735111 pod_ready.go:82] duration metric: took 400.884474ms for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:50.038853  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:50.235000  735111 request.go:632] Waited for 196.046513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:38:50.235060  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:38:50.235065  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.235072  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.235076  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.240407  735111 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 13:38:50.435277  735111 request.go:632] Waited for 194.360733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:50.435339  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:38:50.435344  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.435358  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.435364  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.438173  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:38:50.438657  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:38:50.438675  735111 pod_ready.go:82] duration metric: took 399.816261ms for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:38:50.438685  735111 pod_ready.go:39] duration metric: took 3.19999197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 13:38:50.438699  735111 api_server.go:52] waiting for apiserver process to appear ...
	I0916 13:38:50.438752  735111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:38:50.456008  735111 api_server.go:72] duration metric: took 17.982669041s to wait for apiserver process to appear ...
	I0916 13:38:50.456030  735111 api_server.go:88] waiting for apiserver healthz status ...
	I0916 13:38:50.456054  735111 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0916 13:38:50.460008  735111 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0916 13:38:50.460062  735111 round_trippers.go:463] GET https://192.168.39.94:8443/version
	I0916 13:38:50.460067  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.460074  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.460079  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.460856  735111 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 13:38:50.460955  735111 api_server.go:141] control plane version: v1.31.1
	I0916 13:38:50.460971  735111 api_server.go:131] duration metric: took 4.934707ms to wait for apiserver health ...
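
The apiserver health wait above is two-staged: pgrep confirms a kube-apiserver process exists, then /healthz must return HTTP 200 with the body "ok". A short sketch of the second stage follows, under the same authenticated-client assumption as the node-readiness sketch above.

// Probe the apiserver's /healthz endpoint.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func apiServerHealthy(c *http.Client, apiServer string) (bool, error) {
	resp, err := c.Get(apiServer + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	// Healthy apiservers answer 200 with the literal body "ok".
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiServerHealthy(http.DefaultClient, "https://192.168.39.94:8443")
	fmt.Println(ok, err)
}
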
	I0916 13:38:50.460978  735111 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 13:38:50.635378  735111 request.go:632] Waited for 174.309285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:50.635436  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:50.635441  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.635448  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.635452  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.639465  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:50.644353  735111 system_pods.go:59] 17 kube-system pods found
	I0916 13:38:50.644386  735111 system_pods.go:61] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:38:50.644394  735111 system_pods.go:61] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:38:50.644399  735111 system_pods.go:61] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:38:50.644404  735111 system_pods.go:61] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:38:50.644409  735111 system_pods.go:61] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:38:50.644414  735111 system_pods.go:61] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:38:50.644419  735111 system_pods.go:61] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:38:50.644425  735111 system_pods.go:61] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:38:50.644430  735111 system_pods.go:61] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:38:50.644437  735111 system_pods.go:61] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:38:50.644444  735111 system_pods.go:61] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:38:50.644450  735111 system_pods.go:61] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:38:50.644456  735111 system_pods.go:61] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:38:50.644462  735111 system_pods.go:61] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:38:50.644471  735111 system_pods.go:61] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:38:50.644479  735111 system_pods.go:61] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:38:50.644487  735111 system_pods.go:61] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:38:50.644495  735111 system_pods.go:74] duration metric: took 183.510256ms to wait for pod list to return data ...
	I0916 13:38:50.644507  735111 default_sa.go:34] waiting for default service account to be created ...
	I0916 13:38:50.834929  735111 request.go:632] Waited for 190.338146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:38:50.834990  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:38:50.834996  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:50.835004  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:50.835008  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:50.838515  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:50.838779  735111 default_sa.go:45] found service account: "default"
	I0916 13:38:50.838798  735111 default_sa.go:55] duration metric: took 194.284036ms for default service account to be created ...
	I0916 13:38:50.838808  735111 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 13:38:51.035256  735111 request.go:632] Waited for 196.366226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:51.035349  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:38:51.035359  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:51.035373  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:51.035383  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:51.039582  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:38:51.044181  735111 system_pods.go:86] 17 kube-system pods found
	I0916 13:38:51.044208  735111 system_pods.go:89] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:38:51.044216  735111 system_pods.go:89] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:38:51.044221  735111 system_pods.go:89] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:38:51.044227  735111 system_pods.go:89] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:38:51.044232  735111 system_pods.go:89] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:38:51.044238  735111 system_pods.go:89] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:38:51.044243  735111 system_pods.go:89] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:38:51.044249  735111 system_pods.go:89] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:38:51.044259  735111 system_pods.go:89] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:38:51.044270  735111 system_pods.go:89] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:38:51.044276  735111 system_pods.go:89] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:38:51.044285  735111 system_pods.go:89] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:38:51.044290  735111 system_pods.go:89] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:38:51.044295  735111 system_pods.go:89] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:38:51.044301  735111 system_pods.go:89] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:38:51.044306  735111 system_pods.go:89] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:38:51.044314  735111 system_pods.go:89] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:38:51.044327  735111 system_pods.go:126] duration metric: took 205.507719ms to wait for k8s-apps to be running ...
	I0916 13:38:51.044339  735111 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 13:38:51.044389  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:38:51.066353  735111 system_svc.go:56] duration metric: took 22.003735ms WaitForService to wait for kubelet
	I0916 13:38:51.066383  735111 kubeadm.go:582] duration metric: took 18.593051314s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:38:51.066407  735111 node_conditions.go:102] verifying NodePressure condition ...
	I0916 13:38:51.234843  735111 request.go:632] Waited for 168.334045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes
	I0916 13:38:51.234899  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes
	I0916 13:38:51.234903  735111 round_trippers.go:469] Request Headers:
	I0916 13:38:51.234911  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:38:51.234916  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:38:51.238476  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:38:51.239346  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:38:51.239378  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:38:51.239395  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:38:51.239400  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:38:51.239408  735111 node_conditions.go:105] duration metric: took 172.993764ms to run NodePressure ...
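[Editor's note] The verification steps logged above (kube-system pod list, default service account, NodePressure/node capacity) are all plain Kubernetes API reads. Below is a minimal sketch of the same checks using client-go; the kubeconfig path is illustrative and the code is not minikube's actual implementation.

	// verify_sketch.go — a hedged sketch of the pod/node checks performed above.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Illustrative path; the test uses the kubeconfig under its minikube-integration dir.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
	
		// "waiting for k8s-apps to be running": every kube-system pod should report Running.
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		}
	
		// "verifying NodePressure condition": read per-node cpu and ephemeral-storage capacity.
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}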
	I0916 13:38:51.239469  735111 start.go:241] waiting for startup goroutines ...
	I0916 13:38:51.239512  735111 start.go:255] writing updated cluster config ...
	I0916 13:38:51.241713  735111 out.go:201] 
	I0916 13:38:51.243012  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:38:51.243130  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:38:51.244505  735111 out.go:177] * Starting "ha-190751-m03" control-plane node in "ha-190751" cluster
	I0916 13:38:51.245537  735111 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:38:51.245555  735111 cache.go:56] Caching tarball of preloaded images
	I0916 13:38:51.245661  735111 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:38:51.245690  735111 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:38:51.245781  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:38:51.245930  735111 start.go:360] acquireMachinesLock for ha-190751-m03: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:38:51.245973  735111 start.go:364] duration metric: took 24.574µs to acquireMachinesLock for "ha-190751-m03"
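[Editor's note] acquireMachinesLock above is taken with a 500ms retry delay and a 13m timeout. The sketch below shows that generic acquire-with-timeout pattern using a simple exclusive lock file; the file path and helper names are hypothetical, not minikube's lock package.

	// lock_sketch.go — a hypothetical acquire-with-timeout loop (500ms delay, bounded wait).
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// tryAcquire takes the lock by creating the lock file exclusively.
	func tryAcquire(path string) (bool, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			if os.IsExist(err) {
				return false, nil // someone else holds the lock
			}
			return false, err
		}
		return true, f.Close()
	}
	
	// acquireWithTimeout polls tryAcquire every delay until the deadline passes.
	func acquireWithTimeout(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := tryAcquire(path)
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		start := time.Now()
		if err := acquireWithTimeout("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("acquired in %s\n", time.Since(start))
	}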
	I0916 13:38:51.245996  735111 start.go:93] Provisioning new machine with config: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:38:51.246082  735111 start.go:125] createHost starting for "m03" (driver="kvm2")
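[Editor's note] The provisioning config logged above is what gets persisted to the profile's config.json ("Saving config to .../ha-190751/config.json"). The sketch below serializes a heavily reduced subset of the fields visible in that dump; the struct shape is illustrative only and much smaller than minikube's real ClusterConfig.

	// profile_sketch.go — writes a trimmed, illustrative subset of the cluster config above.
	package main
	
	import (
		"encoding/json"
		"os"
	)
	
	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ControlPlane      bool
		Worker            bool
	}
	
	type ClusterConfig struct {
		Name              string
		Memory            int
		CPUs              int
		DiskSize          int
		Driver            string
		KubernetesVersion string
		APIServerHAVIP    string
		ContainerRuntime  string
		Nodes             []Node
	}
	
	func main() {
		cfg := ClusterConfig{
			Name:              "ha-190751",
			Memory:            2200,
			CPUs:              2,
			DiskSize:          20000,
			Driver:            "kvm2",
			KubernetesVersion: "v1.31.1",
			APIServerHAVIP:    "192.168.39.254",
			ContainerRuntime:  "crio",
			Nodes: []Node{
				{IP: "192.168.39.94", Port: 8443, KubernetesVersion: "v1.31.1", ControlPlane: true, Worker: true},
				{Name: "m02", IP: "192.168.39.192", Port: 8443, KubernetesVersion: "v1.31.1", ControlPlane: true, Worker: true},
				{Name: "m03", Port: 8443, KubernetesVersion: "v1.31.1", ControlPlane: true, Worker: true},
			},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		_ = os.WriteFile("config.json", out, 0o644)
	}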
	I0916 13:38:51.247441  735111 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 13:38:51.247524  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:38:51.247560  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:38:51.262736  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0916 13:38:51.263173  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:38:51.263642  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:38:51.263660  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:38:51.263945  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:38:51.264127  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:38:51.264232  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:38:51.264361  735111 start.go:159] libmachine.API.Create for "ha-190751" (driver="kvm2")
	I0916 13:38:51.264396  735111 client.go:168] LocalClient.Create starting
	I0916 13:38:51.264433  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 13:38:51.264469  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:38:51.264484  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:38:51.264535  735111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 13:38:51.264552  735111 main.go:141] libmachine: Decoding PEM data...
	I0916 13:38:51.264562  735111 main.go:141] libmachine: Parsing certificate...
	I0916 13:38:51.264579  735111 main.go:141] libmachine: Running pre-create checks...
	I0916 13:38:51.264586  735111 main.go:141] libmachine: (ha-190751-m03) Calling .PreCreateCheck
	I0916 13:38:51.264747  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetConfigRaw
	I0916 13:38:51.265081  735111 main.go:141] libmachine: Creating machine...
	I0916 13:38:51.265094  735111 main.go:141] libmachine: (ha-190751-m03) Calling .Create
	I0916 13:38:51.265268  735111 main.go:141] libmachine: (ha-190751-m03) Creating KVM machine...
	I0916 13:38:51.266521  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found existing default KVM network
	I0916 13:38:51.266625  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found existing private KVM network mk-ha-190751
	I0916 13:38:51.266723  735111 main.go:141] libmachine: (ha-190751-m03) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03 ...
	I0916 13:38:51.266747  735111 main.go:141] libmachine: (ha-190751-m03) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 13:38:51.266827  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.266719  735844 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:38:51.266915  735111 main.go:141] libmachine: (ha-190751-m03) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 13:38:51.537695  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.537521  735844 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa...
	I0916 13:38:51.682729  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.682629  735844 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/ha-190751-m03.rawdisk...
	I0916 13:38:51.682756  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Writing magic tar header
	I0916 13:38:51.682769  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Writing SSH key tar header
	I0916 13:38:51.682778  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:51.682750  735844 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03 ...
	I0916 13:38:51.682886  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03
	I0916 13:38:51.682914  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 13:38:51.682926  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:38:51.682942  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03 (perms=drwx------)
	I0916 13:38:51.682963  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 13:38:51.682974  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 13:38:51.682989  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 13:38:51.683003  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 13:38:51.683014  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 13:38:51.683026  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 13:38:51.683037  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home/jenkins
	I0916 13:38:51.683047  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Checking permissions on dir: /home
	I0916 13:38:51.683057  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Skipping /home - not owner
	I0916 13:38:51.683066  735111 main.go:141] libmachine: (ha-190751-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 13:38:51.683076  735111 main.go:141] libmachine: (ha-190751-m03) Creating domain...
	I0916 13:38:51.683978  735111 main.go:141] libmachine: (ha-190751-m03) define libvirt domain using xml: 
	I0916 13:38:51.684000  735111 main.go:141] libmachine: (ha-190751-m03) <domain type='kvm'>
	I0916 13:38:51.684034  735111 main.go:141] libmachine: (ha-190751-m03)   <name>ha-190751-m03</name>
	I0916 13:38:51.684056  735111 main.go:141] libmachine: (ha-190751-m03)   <memory unit='MiB'>2200</memory>
	I0916 13:38:51.684062  735111 main.go:141] libmachine: (ha-190751-m03)   <vcpu>2</vcpu>
	I0916 13:38:51.684067  735111 main.go:141] libmachine: (ha-190751-m03)   <features>
	I0916 13:38:51.684072  735111 main.go:141] libmachine: (ha-190751-m03)     <acpi/>
	I0916 13:38:51.684078  735111 main.go:141] libmachine: (ha-190751-m03)     <apic/>
	I0916 13:38:51.684083  735111 main.go:141] libmachine: (ha-190751-m03)     <pae/>
	I0916 13:38:51.684090  735111 main.go:141] libmachine: (ha-190751-m03)     
	I0916 13:38:51.684095  735111 main.go:141] libmachine: (ha-190751-m03)   </features>
	I0916 13:38:51.684102  735111 main.go:141] libmachine: (ha-190751-m03)   <cpu mode='host-passthrough'>
	I0916 13:38:51.684106  735111 main.go:141] libmachine: (ha-190751-m03)   
	I0916 13:38:51.684111  735111 main.go:141] libmachine: (ha-190751-m03)   </cpu>
	I0916 13:38:51.684116  735111 main.go:141] libmachine: (ha-190751-m03)   <os>
	I0916 13:38:51.684120  735111 main.go:141] libmachine: (ha-190751-m03)     <type>hvm</type>
	I0916 13:38:51.684127  735111 main.go:141] libmachine: (ha-190751-m03)     <boot dev='cdrom'/>
	I0916 13:38:51.684131  735111 main.go:141] libmachine: (ha-190751-m03)     <boot dev='hd'/>
	I0916 13:38:51.684150  735111 main.go:141] libmachine: (ha-190751-m03)     <bootmenu enable='no'/>
	I0916 13:38:51.684163  735111 main.go:141] libmachine: (ha-190751-m03)   </os>
	I0916 13:38:51.684174  735111 main.go:141] libmachine: (ha-190751-m03)   <devices>
	I0916 13:38:51.684184  735111 main.go:141] libmachine: (ha-190751-m03)     <disk type='file' device='cdrom'>
	I0916 13:38:51.684201  735111 main.go:141] libmachine: (ha-190751-m03)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/boot2docker.iso'/>
	I0916 13:38:51.684217  735111 main.go:141] libmachine: (ha-190751-m03)       <target dev='hdc' bus='scsi'/>
	I0916 13:38:51.684227  735111 main.go:141] libmachine: (ha-190751-m03)       <readonly/>
	I0916 13:38:51.684234  735111 main.go:141] libmachine: (ha-190751-m03)     </disk>
	I0916 13:38:51.684267  735111 main.go:141] libmachine: (ha-190751-m03)     <disk type='file' device='disk'>
	I0916 13:38:51.684291  735111 main.go:141] libmachine: (ha-190751-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 13:38:51.684309  735111 main.go:141] libmachine: (ha-190751-m03)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/ha-190751-m03.rawdisk'/>
	I0916 13:38:51.684321  735111 main.go:141] libmachine: (ha-190751-m03)       <target dev='hda' bus='virtio'/>
	I0916 13:38:51.684329  735111 main.go:141] libmachine: (ha-190751-m03)     </disk>
	I0916 13:38:51.684340  735111 main.go:141] libmachine: (ha-190751-m03)     <interface type='network'>
	I0916 13:38:51.684352  735111 main.go:141] libmachine: (ha-190751-m03)       <source network='mk-ha-190751'/>
	I0916 13:38:51.684365  735111 main.go:141] libmachine: (ha-190751-m03)       <model type='virtio'/>
	I0916 13:38:51.684376  735111 main.go:141] libmachine: (ha-190751-m03)     </interface>
	I0916 13:38:51.684386  735111 main.go:141] libmachine: (ha-190751-m03)     <interface type='network'>
	I0916 13:38:51.684393  735111 main.go:141] libmachine: (ha-190751-m03)       <source network='default'/>
	I0916 13:38:51.684401  735111 main.go:141] libmachine: (ha-190751-m03)       <model type='virtio'/>
	I0916 13:38:51.684410  735111 main.go:141] libmachine: (ha-190751-m03)     </interface>
	I0916 13:38:51.684420  735111 main.go:141] libmachine: (ha-190751-m03)     <serial type='pty'>
	I0916 13:38:51.684432  735111 main.go:141] libmachine: (ha-190751-m03)       <target port='0'/>
	I0916 13:38:51.684446  735111 main.go:141] libmachine: (ha-190751-m03)     </serial>
	I0916 13:38:51.684454  735111 main.go:141] libmachine: (ha-190751-m03)     <console type='pty'>
	I0916 13:38:51.684459  735111 main.go:141] libmachine: (ha-190751-m03)       <target type='serial' port='0'/>
	I0916 13:38:51.684466  735111 main.go:141] libmachine: (ha-190751-m03)     </console>
	I0916 13:38:51.684473  735111 main.go:141] libmachine: (ha-190751-m03)     <rng model='virtio'>
	I0916 13:38:51.684481  735111 main.go:141] libmachine: (ha-190751-m03)       <backend model='random'>/dev/random</backend>
	I0916 13:38:51.684486  735111 main.go:141] libmachine: (ha-190751-m03)     </rng>
	I0916 13:38:51.684493  735111 main.go:141] libmachine: (ha-190751-m03)     
	I0916 13:38:51.684497  735111 main.go:141] libmachine: (ha-190751-m03)     
	I0916 13:38:51.684501  735111 main.go:141] libmachine: (ha-190751-m03)   </devices>
	I0916 13:38:51.684506  735111 main.go:141] libmachine: (ha-190751-m03) </domain>
	I0916 13:38:51.684528  735111 main.go:141] libmachine: (ha-190751-m03) 
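[Editor's note] After printing the XML above, the driver defines the guest and starts it ("Creating domain..."). A sketch of that define-and-start step, assuming the libvirt.org/go/libvirt bindings, a local qemu:///system socket, and the XML saved to a file; this is not the kvm2 driver's actual code.

	// domain_sketch.go — define and start a libvirt domain from an XML definition.
	package main
	
	import (
		"fmt"
		"os"
	
		"libvirt.org/go/libvirt"
	)
	
	func main() {
		// Assumed file containing the <domain type='kvm'> document printed in the log.
		xml, err := os.ReadFile("ha-190751-m03.xml")
		if err != nil {
			panic(err)
		}
	
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
		if err != nil {
			panic(err)
		}
		defer dom.Free()
	
		if err := dom.Create(); err != nil { // "Creating domain..." actually boots the guest
			panic(err)
		}
		fmt.Println("domain defined and started; next step is waiting for a DHCP lease / IP")
	}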
	I0916 13:38:51.690532  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:87:7b:da in network default
	I0916 13:38:51.692006  735111 main.go:141] libmachine: (ha-190751-m03) Ensuring networks are active...
	I0916 13:38:51.692023  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:51.692718  735111 main.go:141] libmachine: (ha-190751-m03) Ensuring network default is active
	I0916 13:38:51.693016  735111 main.go:141] libmachine: (ha-190751-m03) Ensuring network mk-ha-190751 is active
	I0916 13:38:51.693413  735111 main.go:141] libmachine: (ha-190751-m03) Getting domain xml...
	I0916 13:38:51.694149  735111 main.go:141] libmachine: (ha-190751-m03) Creating domain...
	I0916 13:38:52.898349  735111 main.go:141] libmachine: (ha-190751-m03) Waiting to get IP...
	I0916 13:38:52.899012  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:52.899379  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:52.899459  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:52.899379  735844 retry.go:31] will retry after 267.73261ms: waiting for machine to come up
	I0916 13:38:53.168962  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:53.169450  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:53.169477  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:53.169397  735844 retry.go:31] will retry after 355.778778ms: waiting for machine to come up
	I0916 13:38:53.527048  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:53.527444  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:53.527475  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:53.527403  735844 retry.go:31] will retry after 429.135107ms: waiting for machine to come up
	I0916 13:38:53.958061  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:53.958483  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:53.958507  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:53.958433  735844 retry.go:31] will retry after 431.318286ms: waiting for machine to come up
	I0916 13:38:54.391723  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:54.392132  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:54.392154  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:54.392075  735844 retry.go:31] will retry after 601.011895ms: waiting for machine to come up
	I0916 13:38:54.994478  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:54.994857  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:54.994885  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:54.994816  735844 retry.go:31] will retry after 853.395587ms: waiting for machine to come up
	I0916 13:38:55.849861  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:55.850269  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:55.850295  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:55.850218  735844 retry.go:31] will retry after 1.068824601s: waiting for machine to come up
	I0916 13:38:56.920153  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:56.920525  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:56.920556  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:56.920497  735844 retry.go:31] will retry after 1.007149511s: waiting for machine to come up
	I0916 13:38:57.929630  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:57.930174  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:57.930196  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:57.930118  735844 retry.go:31] will retry after 1.469842637s: waiting for machine to come up
	I0916 13:38:59.401026  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:38:59.401415  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:38:59.401440  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:38:59.401380  735844 retry.go:31] will retry after 2.104821665s: waiting for machine to come up
	I0916 13:39:01.507676  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:01.508197  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:01.508228  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:01.508132  735844 retry.go:31] will retry after 2.346855381s: waiting for machine to come up
	I0916 13:39:03.857755  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:03.858275  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:03.858329  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:03.858228  735844 retry.go:31] will retry after 3.255293037s: waiting for machine to come up
	I0916 13:39:07.114891  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:07.115304  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:07.115323  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:07.115261  735844 retry.go:31] will retry after 3.528582737s: waiting for machine to come up
	I0916 13:39:10.646649  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:10.647143  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find current IP address of domain ha-190751-m03 in network mk-ha-190751
	I0916 13:39:10.647171  735111 main.go:141] libmachine: (ha-190751-m03) DBG | I0916 13:39:10.647092  735844 retry.go:31] will retry after 3.488162223s: waiting for machine to come up
	I0916 13:39:14.138431  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:14.138871  735111 main.go:141] libmachine: (ha-190751-m03) Found IP for machine: 192.168.39.134
	I0916 13:39:14.138913  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has current primary IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:14.138922  735111 main.go:141] libmachine: (ha-190751-m03) Reserving static IP address...
	I0916 13:39:14.139293  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find host DHCP lease matching {name: "ha-190751-m03", mac: "52:54:00:0e:4e:0a", ip: "192.168.39.134"} in network mk-ha-190751
	I0916 13:39:14.210728  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Getting to WaitForSSH function...
	I0916 13:39:14.210765  735111 main.go:141] libmachine: (ha-190751-m03) Reserved static IP address: 192.168.39.134
	I0916 13:39:14.210775  735111 main.go:141] libmachine: (ha-190751-m03) Waiting for SSH to be available...
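[Editor's note] The "Waiting to get IP..." block above polls the domain's DHCP lease with growing, jittered delays (roughly 267ms up to a few seconds) until an address appears. The sketch below shows that generic wait loop; lookupIP is a stand-in for the real lease-by-MAC query, and the delay schedule only approximates the one in the log.

	// waitip_sketch.go — poll a lookup with growing, jittered delays until it succeeds.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if ip, ok := lookupIP(); ok {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", errors.New("timed out waiting for machine to come up")
			}
			// Grow the delay and add jitter, similar to the "will retry after ..." lines above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %s\n", attempt, sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
	}
	
	func main() {
		// Hypothetical lookup that "finds" the lease after a few seconds.
		start := time.Now()
		ip, err := waitForIP(func() (string, bool) {
			if time.Since(start) > 3*time.Second {
				return "192.168.39.134", true
			}
			return "", false
		}, time.Minute)
		fmt.Println(ip, err)
	}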
	I0916 13:39:14.213475  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:14.213855  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751
	I0916 13:39:14.213886  735111 main.go:141] libmachine: (ha-190751-m03) DBG | unable to find defined IP address of network mk-ha-190751 interface with MAC address 52:54:00:0e:4e:0a
	I0916 13:39:14.214225  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH client type: external
	I0916 13:39:14.214252  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa (-rw-------)
	I0916 13:39:14.214278  735111 main.go:141] libmachine: (ha-190751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:39:14.214293  735111 main.go:141] libmachine: (ha-190751-m03) DBG | About to run SSH command:
	I0916 13:39:14.214314  735111 main.go:141] libmachine: (ha-190751-m03) DBG | exit 0
	I0916 13:39:14.217901  735111 main.go:141] libmachine: (ha-190751-m03) DBG | SSH cmd err, output: exit status 255: 
	I0916 13:39:14.217926  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 13:39:14.217942  735111 main.go:141] libmachine: (ha-190751-m03) DBG | command : exit 0
	I0916 13:39:14.217953  735111 main.go:141] libmachine: (ha-190751-m03) DBG | err     : exit status 255
	I0916 13:39:14.217965  735111 main.go:141] libmachine: (ha-190751-m03) DBG | output  : 
	I0916 13:39:17.218981  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Getting to WaitForSSH function...
	I0916 13:39:17.221212  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.221595  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.221616  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.221784  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH client type: external
	I0916 13:39:17.221810  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa (-rw-------)
	I0916 13:39:17.221840  735111 main.go:141] libmachine: (ha-190751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 13:39:17.221856  735111 main.go:141] libmachine: (ha-190751-m03) DBG | About to run SSH command:
	I0916 13:39:17.221869  735111 main.go:141] libmachine: (ha-190751-m03) DBG | exit 0
	I0916 13:39:17.349568  735111 main.go:141] libmachine: (ha-190751-m03) DBG | SSH cmd err, output: <nil>: 
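[Editor's note] WaitForSSH above shells out to the external ssh client with the machine's generated key and keeps running `exit 0` until it returns cleanly (the earlier exit status 255 is the not-yet-ready case). An equivalent reachability probe written against golang.org/x/crypto/ssh, as a sketch; the address and key path are taken from the log, the helper name is illustrative.

	// waitssh_sketch.go — probe SSH reachability by running "exit 0" over a key-based login.
	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	func sshReachable(addr, user, keyPath string) error {
		keyPEM, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(keyPEM)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,            // matches ConnectTimeout=10 above
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // a non-nil error corresponds to the retry case in the log
	}
	
	func main() {
		err := sshReachable("192.168.39.134:22", "docker",
			"/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa")
		fmt.Println("ssh reachable:", err == nil)
	}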
	I0916 13:39:17.349894  735111 main.go:141] libmachine: (ha-190751-m03) KVM machine creation complete!
	I0916 13:39:17.350159  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetConfigRaw
	I0916 13:39:17.350743  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:17.350919  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:17.351092  735111 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 13:39:17.351104  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:39:17.352188  735111 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 13:39:17.352202  735111 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 13:39:17.352209  735111 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 13:39:17.352216  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.354508  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.354845  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.354884  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.355038  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.355191  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.355357  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.355512  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.355653  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.355852  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.355863  735111 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 13:39:17.456888  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:39:17.456915  735111 main.go:141] libmachine: Detecting the provisioner...
	I0916 13:39:17.456924  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.459979  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.460495  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.460524  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.460810  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.461011  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.461160  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.461326  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.461494  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.461705  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.461719  735111 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 13:39:17.562014  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 13:39:17.562068  735111 main.go:141] libmachine: found compatible host: buildroot
	I0916 13:39:17.562074  735111 main.go:141] libmachine: Provisioning with buildroot...
	I0916 13:39:17.562082  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:39:17.562340  735111 buildroot.go:166] provisioning hostname "ha-190751-m03"
	I0916 13:39:17.562369  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:39:17.562584  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.564921  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.565281  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.565303  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.565406  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.565575  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.565742  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.565889  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.566033  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.566231  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.566243  735111 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751-m03 && echo "ha-190751-m03" | sudo tee /etc/hostname
	I0916 13:39:17.684851  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751-m03
	
	I0916 13:39:17.684884  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.687807  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.688158  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.688188  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.688334  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.688504  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.688667  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.688820  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.688969  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:17.689174  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:17.689191  735111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:39:17.798755  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:39:17.798787  735111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:39:17.798807  735111 buildroot.go:174] setting up certificates
	I0916 13:39:17.798821  735111 provision.go:84] configureAuth start
	I0916 13:39:17.798834  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetMachineName
	I0916 13:39:17.799097  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:17.801945  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.802390  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.802418  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.802614  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.804893  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.805203  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.805231  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.805352  735111 provision.go:143] copyHostCerts
	I0916 13:39:17.805387  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:39:17.805422  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:39:17.805430  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:39:17.805514  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:39:17.805613  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:39:17.805639  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:39:17.805647  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:39:17.805701  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:39:17.805770  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:39:17.805793  735111 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:39:17.805802  735111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:39:17.805836  735111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:39:17.805906  735111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751-m03 san=[127.0.0.1 192.168.39.134 ha-190751-m03 localhost minikube]
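[Editor's note] The configureAuth step above issues a server certificate signed by the profile's CA with the SANs listed (127.0.0.1, the machine IP, the hostname, localhost, minikube). A compact sketch of that idea with crypto/x509; the CA here is a throwaway generated in-memory (the real flow loads ca.pem/ca-key.pem), and error handling is trimmed for brevity.

	// servercert_sketch.go — issue a CA-signed server certificate with the SANs shown above.
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA for the sketch only.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server certificate with the SANs from the log.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-190751-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			DNSNames:     []string{"ha-190751-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.134")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
	}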
	I0916 13:39:17.870032  735111 provision.go:177] copyRemoteCerts
	I0916 13:39:17.870099  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:39:17.870126  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:17.872522  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.872837  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:17.872864  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:17.872980  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:17.873152  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:17.873300  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:17.873438  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:17.955555  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:39:17.955635  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:39:17.978952  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:39:17.979009  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 13:39:18.001031  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:39:18.001082  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 13:39:18.023641  735111 provision.go:87] duration metric: took 224.805023ms to configureAuth
	I0916 13:39:18.023667  735111 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:39:18.023847  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:39:18.023917  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.026697  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.027058  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.027085  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.027295  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.027491  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.027638  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.027736  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.027854  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:18.027999  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:18.028012  735111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:39:18.253860  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 13:39:18.253896  735111 main.go:141] libmachine: Checking connection to Docker...
	I0916 13:39:18.253908  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetURL
	I0916 13:39:18.255174  735111 main.go:141] libmachine: (ha-190751-m03) DBG | Using libvirt version 6000000
	I0916 13:39:18.257182  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.257566  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.257598  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.257788  735111 main.go:141] libmachine: Docker is up and running!
	I0916 13:39:18.257804  735111 main.go:141] libmachine: Reticulating splines...
	I0916 13:39:18.257812  735111 client.go:171] duration metric: took 26.993406027s to LocalClient.Create
	I0916 13:39:18.257839  735111 start.go:167] duration metric: took 26.993482617s to libmachine.API.Create "ha-190751"
	I0916 13:39:18.257849  735111 start.go:293] postStartSetup for "ha-190751-m03" (driver="kvm2")
	I0916 13:39:18.257862  735111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:39:18.257880  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.258114  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:39:18.258140  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.260112  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.260396  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.260424  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.260534  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.260698  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.260863  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.261006  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:18.339569  735111 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:39:18.343728  735111 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:39:18.343755  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:39:18.343830  735111 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:39:18.343929  735111 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:39:18.343942  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:39:18.344054  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:39:18.352825  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:39:18.375620  735111 start.go:296] duration metric: took 117.756033ms for postStartSetup
	I0916 13:39:18.375681  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetConfigRaw
	I0916 13:39:18.376309  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:18.378881  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.379283  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.379309  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.379598  735111 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:39:18.379820  735111 start.go:128] duration metric: took 27.133726733s to createHost
	I0916 13:39:18.379844  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.382112  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.382511  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.382542  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.382687  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.382870  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.383030  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.383189  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.383366  735111 main.go:141] libmachine: Using SSH client type: native
	I0916 13:39:18.383580  735111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0916 13:39:18.383591  735111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:39:18.486014  735111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726493958.465865548
	
	I0916 13:39:18.486040  735111 fix.go:216] guest clock: 1726493958.465865548
	I0916 13:39:18.486049  735111 fix.go:229] Guest: 2024-09-16 13:39:18.465865548 +0000 UTC Remote: 2024-09-16 13:39:18.379833761 +0000 UTC m=+141.735737766 (delta=86.031787ms)
	I0916 13:39:18.486069  735111 fix.go:200] guest clock delta is within tolerance: 86.031787ms
	I0916 13:39:18.486076  735111 start.go:83] releasing machines lock for "ha-190751-m03", held for 27.240091901s
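
	The fix.go lines above run `date +%s.%N` on the guest and compare the result against the host clock, accepting the machine because the ~86ms delta is inside tolerance. A hedged sketch of that comparison, assuming the SSH output has already been captured as a string; the 2-second tolerance here is illustrative, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns a `date +%s.%N` string such as "1726493958.465865548"
// into a time.Time, parsing seconds and nanoseconds separately.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726493958.465865548") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest) // compares against the current wall clock
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
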
	I0916 13:39:18.486100  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.486351  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:18.488910  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.489269  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.489293  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.491335  735111 out.go:177] * Found network options:
	I0916 13:39:18.492394  735111 out.go:177]   - NO_PROXY=192.168.39.94,192.168.39.192
	W0916 13:39:18.493519  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 13:39:18.493541  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:39:18.493559  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.494017  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.494160  735111 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:39:18.494258  735111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:39:18.494291  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	W0916 13:39:18.494369  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 13:39:18.494391  735111 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 13:39:18.494456  735111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:39:18.494476  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:39:18.496983  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497179  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497422  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.497444  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497573  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:18.497589  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.497592  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:18.497762  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.497774  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:39:18.497943  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:39:18.497959  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.498092  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:39:18.498128  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:18.498215  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:39:18.737954  735111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:39:18.744923  735111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:39:18.745001  735111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:39:18.764476  735111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
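
	The find/mv command above sidelines any bridge or podman CNI configs by renaming them to *.mk_disabled so CRI-O ignores them. A small Go sketch of the same idea (the directory and patterns come from the logged command; this is not minikube's cni package):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableCNIConfs renames bridge/podman CNI configs to <name>.mk_disabled,
// mirroring the find/mv one-liner in the log. Requires write access to dir.
func disableCNIConfs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	d, err := disableCNIConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("disable failed:", err)
		return
	}
	fmt.Println("disabled:", d)
}
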
	I0916 13:39:18.764503  735111 start.go:495] detecting cgroup driver to use...
	I0916 13:39:18.764573  735111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:39:18.781234  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:39:18.794933  735111 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:39:18.794980  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:39:18.808632  735111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:39:18.821849  735111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:39:18.942168  735111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:39:19.094357  735111 docker.go:233] disabling docker service ...
	I0916 13:39:19.094418  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:39:19.112538  735111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:39:19.125554  735111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:39:19.260134  735111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:39:19.379363  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:39:19.393121  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:39:19.410931  735111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:39:19.411005  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.421424  735111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:39:19.421473  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.431135  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.440675  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.451628  735111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:39:19.462860  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.474046  735111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:39:19.490880  735111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
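
	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and inject the unprivileged-port sysctl. A minimal Go equivalent of the first two substitutions, operating on a local copy of the file (the real flow, as logged, runs sed over SSH on the guest):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the logged sed commands:
// force the pause image and the cgroupfs cgroup manager.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("02-crio.conf"); err != nil { // local copy, not the guest file
		fmt.Println("rewrite failed:", err)
	}
}
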
	I0916 13:39:19.501369  735111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:39:19.510937  735111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 13:39:19.510976  735111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 13:39:19.523965  735111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 13:39:19.533361  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:39:19.658818  735111 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 13:39:19.752488  735111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:39:19.752550  735111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:39:19.757903  735111 start.go:563] Will wait 60s for crictl version
	I0916 13:39:19.757956  735111 ssh_runner.go:195] Run: which crictl
	I0916 13:39:19.762158  735111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:39:19.799468  735111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:39:19.799536  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:39:19.826239  735111 ssh_runner.go:195] Run: crio --version
	I0916 13:39:19.853266  735111 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:39:19.854407  735111 out.go:177]   - env NO_PROXY=192.168.39.94
	I0916 13:39:19.855494  735111 out.go:177]   - env NO_PROXY=192.168.39.94,192.168.39.192
	I0916 13:39:19.856378  735111 main.go:141] libmachine: (ha-190751-m03) Calling .GetIP
	I0916 13:39:19.858923  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:19.859322  735111 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:39:19.859348  735111 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:39:19.859587  735111 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:39:19.863498  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
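
	The two commands above first grep /etc/hosts for an existing host.minikube.internal entry and, since it is missing, rewrite the file to drop any stale line and append the new mapping. A sketch of the same idempotent update in Go, run here against a local file purely for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line whose last field is host and then
// appends "ip\thost", mirroring the { grep -v ...; echo ...; } > tmp pattern.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // remove the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}
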
	I0916 13:39:19.875330  735111 mustload.go:65] Loading cluster: ha-190751
	I0916 13:39:19.875549  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:39:19.875792  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:39:19.875829  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:39:19.890796  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0916 13:39:19.891172  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:39:19.891639  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:39:19.891659  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:39:19.891993  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:39:19.892178  735111 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:39:19.893735  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:39:19.894037  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:39:19.894075  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:39:19.908285  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36371
	I0916 13:39:19.908780  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:39:19.909236  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:39:19.909259  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:39:19.909576  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:39:19.909803  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:39:19.909978  735111 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.134
	I0916 13:39:19.909990  735111 certs.go:194] generating shared ca certs ...
	I0916 13:39:19.910004  735111 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:39:19.910128  735111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:39:19.910172  735111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:39:19.910183  735111 certs.go:256] generating profile certs ...
	I0916 13:39:19.910268  735111 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:39:19.910294  735111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689
	I0916 13:39:19.910319  735111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.192 192.168.39.134 192.168.39.254]
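
	The crypto.go line above generates an apiserver certificate whose IP SANs cover the service IP, loopback, all three control-plane node IPs, and the 192.168.39.254 VIP. A hedged sketch of issuing a certificate with those IP SANs using crypto/x509; it self-signs for brevity, whereas the real flow signs with the cluster CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the log line above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.94"), net.ParseIP("192.168.39.192"),
		net.ParseIP("192.168.39.134"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed (template is its own parent) purely for the sketch.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
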
	I0916 13:39:20.158258  735111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689 ...
	I0916 13:39:20.158304  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689: {Name:mk8e75c47c0b8af5b7deff3b98169e4c7bff2c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:39:20.158501  735111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689 ...
	I0916 13:39:20.158515  735111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689: {Name:mk2b6257004806042da85fdc625bc8844312e657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:39:20.158595  735111 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.4e817689 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:39:20.158739  735111 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.4e817689 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
	I0916 13:39:20.158881  735111 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:39:20.158898  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:39:20.158913  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:39:20.158929  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:39:20.158944  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:39:20.158959  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:39:20.158974  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:39:20.158989  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:39:20.173756  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:39:20.173838  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:39:20.173877  735111 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:39:20.173890  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:39:20.173914  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:39:20.173940  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:39:20.173964  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:39:20.174009  735111 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:39:20.174039  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.174057  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.174074  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.174121  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:39:20.177038  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:20.177466  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:39:20.177488  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:20.177715  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:39:20.177922  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:39:20.178082  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:39:20.178224  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:39:20.253980  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 13:39:20.260424  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 13:39:20.272373  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 13:39:20.276772  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 13:39:20.291797  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 13:39:20.295875  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 13:39:20.306292  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 13:39:20.310789  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 13:39:20.320754  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 13:39:20.324536  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 13:39:20.334814  735111 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 13:39:20.338783  735111 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 13:39:20.352083  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:39:20.380259  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:39:20.406780  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:39:20.429266  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:39:20.452746  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 13:39:20.476085  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 13:39:20.498261  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:39:20.520565  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:39:20.543260  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:39:20.566634  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:39:20.591982  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:39:20.617886  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 13:39:20.636903  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 13:39:20.655894  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 13:39:20.673701  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 13:39:20.691307  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 13:39:20.708148  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 13:39:20.725684  735111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 13:39:20.741649  735111 ssh_runner.go:195] Run: openssl version
	I0916 13:39:20.747350  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:39:20.757640  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.762088  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.762145  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:39:20.768483  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 13:39:20.778516  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:39:20.788315  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.792414  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.792463  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:39:20.797561  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:39:20.807429  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:39:20.817363  735111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.821541  735111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.821587  735111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:39:20.826869  735111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
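
	Each certificate copied above is then made trusted by symlinking it under /etc/ssl/certs/<subject-hash>.0, with the hash taken from `openssl x509 -hash -noout -in <pem>` exactly as logged. A sketch that shells out to openssl the same way (assumes openssl on PATH and write access to the certs directory; paths are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash for pem and creates
// <certsDir>/<hash>.0 -> pem if the link does not already exist.
func linkCert(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pem, link) // typically needs root for /etc/ssl/certs
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
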
	I0916 13:39:20.836683  735111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:39:20.840506  735111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 13:39:20.840560  735111 kubeadm.go:934] updating node {m03 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0916 13:39:20.840651  735111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 13:39:20.840686  735111 kube-vip.go:115] generating kube-vip config ...
	I0916 13:39:20.840723  735111 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:39:20.855049  735111 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:39:20.855113  735111 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
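
	The kube-vip static pod manifest above is rendered with this cluster's VIP (192.168.39.254), interface, and API server port filled in. A minimal sketch of rendering such a manifest with text/template; the template fragment below is abbreviated and illustrative, not minikube's embedded kube-vip template:

package main

import (
	"os"
	"text/template"
)

// Abbreviated kube-vip pod template: only the fields that vary per cluster.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log above; rendered to stdout for illustration.
	_ = t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}
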
	I0916 13:39:20.855153  735111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:39:20.864429  735111 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 13:39:20.864470  735111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 13:39:20.873475  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 13:39:20.873499  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:39:20.873510  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 13:39:20.873529  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:39:20.873556  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 13:39:20.873573  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 13:39:20.873577  735111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 13:39:20.873617  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:39:20.891619  735111 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:39:20.891623  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 13:39:20.891655  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 13:39:20.891661  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 13:39:20.891681  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 13:39:20.891696  735111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 13:39:20.906537  735111 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 13:39:20.906560  735111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
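
	The transfer above copies the cached kubeadm/kubectl/kubelet binaries onto the new node; when the local cache is cold, the logged URLs show them being fetched from dl.k8s.io together with a .sha256 checksum file. A hedged sketch of downloading one binary and verifying it against that published digest (URL from the log; the whole binary is read into memory, which is fine for a sketch):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url and returns the response body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumFile))[0] // the file contains the hex digest
	sum := sha256.Sum256(bin)
	if hex.EncodeToString(sum[:]) != want {
		fmt.Println("checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubectl verified,", len(bin), "bytes")
}
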
	I0916 13:39:21.709406  735111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 13:39:21.719559  735111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 13:39:21.736248  735111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:39:21.753899  735111 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 13:39:21.770439  735111 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:39:21.774406  735111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 13:39:21.787696  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:39:21.922137  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:39:21.938877  735111 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:39:21.939219  735111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:39:21.939287  735111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:39:21.955161  735111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0916 13:39:21.955639  735111 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:39:21.956110  735111 main.go:141] libmachine: Using API Version  1
	I0916 13:39:21.956129  735111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:39:21.956492  735111 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:39:21.956670  735111 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:39:21.956836  735111 start.go:317] joinCluster: &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0916 13:39:21.957003  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 13:39:21.957020  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:39:21.959985  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:21.960436  735111 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:39:21.960456  735111 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:39:21.960607  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:39:21.960762  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:39:21.960900  735111 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:39:21.961045  735111 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:39:22.126228  735111 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:39:22.126281  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka76j2.9amzatrp4hsrar4a --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m03 --control-plane --apiserver-advertise-address=192.168.39.134 --apiserver-bind-port=8443"
	I0916 13:39:45.289639  735111 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka76j2.9amzatrp4hsrar4a --discovery-token-ca-cert-hash sha256:40463d1766828cd98d0b3d82eb62b65ad46ddd558da2fd9e3536672d6eade3c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-190751-m03 --control-plane --apiserver-advertise-address=192.168.39.134 --apiserver-bind-port=8443": (23.163318972s)
	I0916 13:39:45.289714  735111 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 13:39:45.783946  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-190751-m03 minikube.k8s.io/updated_at=2024_09_16T13_39_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86 minikube.k8s.io/name=ha-190751 minikube.k8s.io/primary=false
	I0916 13:39:45.960776  735111 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-190751-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 13:39:46.095266  735111 start.go:319] duration metric: took 24.138422609s to joinCluster
	I0916 13:39:46.095373  735111 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 13:39:46.095694  735111 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:39:46.096702  735111 out.go:177] * Verifying Kubernetes components...
	I0916 13:39:46.097722  735111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:39:46.369679  735111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:39:46.407374  735111 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:39:46.407727  735111 kapi.go:59] client config for ha-190751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 13:39:46.407816  735111 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.94:8443
	I0916 13:39:46.408144  735111 node_ready.go:35] waiting up to 6m0s for node "ha-190751-m03" to be "Ready" ...
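
	The requests that follow poll GET /api/v1/nodes/ha-190751-m03 roughly every 500ms until the node reports Ready. The same wait can be sketched with client-go as below (kubeconfig path, node name, and 6m timeout are taken from the log; this is an illustration, not minikube's node_ready implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19652-713072/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the cadence in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-190751-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-190751-m03" is Ready`)
}
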
	I0916 13:39:46.408241  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:46.408250  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:46.408263  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:46.408274  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:46.411667  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:46.908463  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:46.908493  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:46.908507  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:46.908515  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:46.911963  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:47.408903  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:47.408934  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:47.408944  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:47.408951  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:47.413413  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:39:47.909411  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:47.909432  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:47.909441  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:47.909445  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:47.913196  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:48.409224  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:48.409244  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:48.409253  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:48.409260  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:48.412020  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:48.412635  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:48.909014  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:48.909042  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:48.909054  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:48.909059  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:48.912923  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:49.409193  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:49.409216  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:49.409224  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:49.409228  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:49.412619  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:49.909078  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:49.909099  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:49.909107  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:49.909119  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:49.911692  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:50.409259  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:50.409281  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:50.409289  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:50.409295  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:50.412356  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:50.413278  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:50.908598  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:50.908623  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:50.908634  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:50.908639  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:50.911506  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:51.408413  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:51.408442  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:51.408454  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:51.408462  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:51.411596  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:51.909366  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:51.909389  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:51.909400  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:51.909410  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:51.912625  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:52.409358  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:52.409379  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:52.409387  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:52.409390  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:52.412509  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:52.908543  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:52.908574  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:52.908586  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:52.908593  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:52.912433  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:52.913241  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:53.408433  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:53.408459  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:53.408472  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:53.408477  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:53.411673  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:53.908627  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:53.908650  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:53.908659  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:53.908664  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:53.912236  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:54.409247  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:54.409272  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:54.409283  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:54.409290  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:54.412057  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:54.908305  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:54.908331  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:54.908340  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:54.908346  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:54.911667  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:55.408456  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:55.408483  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:55.408495  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:55.408501  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:55.411755  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:55.412338  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:55.908684  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:55.908707  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:55.908717  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:55.908722  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:55.912000  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:56.409340  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:56.409367  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:56.409377  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:56.409381  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:56.412662  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:56.908456  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:56.908487  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:56.908496  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:56.908500  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:56.912441  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:57.408340  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:57.408367  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:57.408376  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:57.408380  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:57.411606  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:57.909190  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:57.909215  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:57.909222  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:57.909226  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:57.912661  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:57.913318  735111 node_ready.go:53] node "ha-190751-m03" has status "Ready":"False"
	I0916 13:39:58.408607  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:58.408634  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:58.408645  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:58.408650  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:58.412662  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:58.909100  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:58.909121  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:58.909130  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:58.909134  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:58.912004  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:39:59.409198  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:59.409236  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:59.409247  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:59.409260  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:59.412639  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:39:59.908996  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:39:59.909015  735111 round_trippers.go:469] Request Headers:
	I0916 13:39:59.909023  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:39:59.909027  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:39:59.912302  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.408791  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:00.408817  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.408827  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.408831  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.412656  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.413347  735111 node_ready.go:49] node "ha-190751-m03" has status "Ready":"True"
	I0916 13:40:00.413365  735111 node_ready.go:38] duration metric: took 14.005200684s for node "ha-190751-m03" to be "Ready" ...
	I0916 13:40:00.413374  735111 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 13:40:00.413449  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:00.413458  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.413466  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.413471  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.418583  735111 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 13:40:00.427420  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.427521  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw8n
	I0916 13:40:00.427529  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.427537  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.427540  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.432360  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:40:00.433633  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:00.433650  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.433658  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.433664  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.436286  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.436801  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.436824  735111 pod_ready.go:82] duration metric: took 9.372689ms for pod "coredns-7c65d6cfc9-9lw8n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.436837  735111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.436923  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gzkpj
	I0916 13:40:00.436936  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.436953  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.436962  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.439778  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.440560  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:00.440580  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.440591  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.440599  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.443192  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.443706  735111 pod_ready.go:93] pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.443721  735111 pod_ready.go:82] duration metric: took 6.871006ms for pod "coredns-7c65d6cfc9-gzkpj" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.443730  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.443780  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751
	I0916 13:40:00.443786  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.443794  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.443800  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.447753  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.448371  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:00.448386  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.448394  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.448399  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.451314  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.451831  735111 pod_ready.go:93] pod "etcd-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.451850  735111 pod_ready.go:82] duration metric: took 8.114775ms for pod "etcd-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.451860  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.451926  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m02
	I0916 13:40:00.451933  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.451941  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.451948  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.454389  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.454905  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:00.454919  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.454928  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.454934  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.457592  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.458235  735111 pod_ready.go:93] pod "etcd-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.458256  735111 pod_ready.go:82] duration metric: took 6.386626ms for pod "etcd-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.458267  735111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.609688  735111 request.go:632] Waited for 151.317138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m03
	I0916 13:40:00.609805  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/etcd-ha-190751-m03
	I0916 13:40:00.609819  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.609831  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.609840  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.612852  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:00.809297  735111 request.go:632] Waited for 195.380467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:00.809375  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:00.809387  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:00.809398  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:00.809406  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:00.812844  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:00.813633  735111 pod_ready.go:93] pod "etcd-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:00.813655  735111 pod_ready.go:82] duration metric: took 355.380709ms for pod "etcd-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:00.813698  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.009712  735111 request.go:632] Waited for 195.903853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:40:01.009809  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751
	I0916 13:40:01.009823  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.009834  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.009844  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.013414  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.209519  735111 request.go:632] Waited for 195.355467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:01.209596  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:01.209603  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.209613  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.209631  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.212826  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.213480  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:01.213498  735111 pod_ready.go:82] duration metric: took 399.791444ms for pod "kube-apiserver-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.213508  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.409070  735111 request.go:632] Waited for 195.469232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:40:01.409150  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m02
	I0916 13:40:01.409155  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.409162  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.409167  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.412916  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.609652  735111 request.go:632] Waited for 196.037799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:01.609739  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:01.609746  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.609762  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.609769  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.613056  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:01.613647  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:01.613735  735111 pod_ready.go:82] duration metric: took 400.154129ms for pod "kube-apiserver-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.613761  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:01.809249  735111 request.go:632] Waited for 195.381651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m03
	I0916 13:40:01.809338  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-190751-m03
	I0916 13:40:01.809350  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:01.809361  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:01.809369  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:01.813210  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.009415  735111 request.go:632] Waited for 195.344265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:02.009525  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:02.009535  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.009550  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.009562  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.013296  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.013804  735111 pod_ready.go:93] pod "kube-apiserver-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:02.013826  735111 pod_ready.go:82] duration metric: took 400.056603ms for pod "kube-apiserver-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.013836  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.208873  735111 request.go:632] Waited for 194.922455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:40:02.208954  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751
	I0916 13:40:02.208961  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.208972  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.208984  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.212385  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.409470  735111 request.go:632] Waited for 196.297466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:02.409545  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:02.409588  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.409602  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.409612  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.412884  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.413456  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:02.413477  735111 pod_ready.go:82] duration metric: took 399.634196ms for pod "kube-controller-manager-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.413491  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.609618  735111 request.go:632] Waited for 196.019413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:40:02.609782  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m02
	I0916 13:40:02.609798  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.609809  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.609817  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.613405  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:02.809452  735111 request.go:632] Waited for 194.909335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:02.809554  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:02.809563  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:02.809573  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:02.809583  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:02.813724  735111 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 13:40:02.814447  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:02.814471  735111 pod_ready.go:82] duration metric: took 400.970352ms for pod "kube-controller-manager-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:02.814482  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.009529  735111 request.go:632] Waited for 194.967581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m03
	I0916 13:40:03.009609  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-190751-m03
	I0916 13:40:03.009621  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.009638  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.009644  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.013202  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.209298  735111 request.go:632] Waited for 195.381571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:03.209392  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:03.209400  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.209411  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.209420  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.212635  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.213153  735111 pod_ready.go:93] pod "kube-controller-manager-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:03.213176  735111 pod_ready.go:82] duration metric: took 398.684012ms for pod "kube-controller-manager-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.213190  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.409330  735111 request.go:632] Waited for 196.051127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:40:03.409437  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-24q9n
	I0916 13:40:03.409449  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.409459  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.409467  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.412516  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.609651  735111 request.go:632] Waited for 196.394591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:03.609742  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:03.609749  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.609761  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.609772  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.613665  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:03.614254  735111 pod_ready.go:93] pod "kube-proxy-24q9n" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:03.614281  735111 pod_ready.go:82] duration metric: took 401.084241ms for pod "kube-proxy-24q9n" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.614292  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:03.809285  735111 request.go:632] Waited for 194.919635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:40:03.809367  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9d7kt
	I0916 13:40:03.809383  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:03.809394  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:03.809405  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:03.812801  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.008838  735111 request.go:632] Waited for 195.285686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.008898  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.008903  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.008911  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.008931  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.012287  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.013023  735111 pod_ready.go:93] pod "kube-proxy-9d7kt" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:04.013043  735111 pod_ready.go:82] duration metric: took 398.743498ms for pod "kube-proxy-9d7kt" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.013052  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9lpwl" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.209207  735111 request.go:632] Waited for 196.061561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9lpwl
	I0916 13:40:04.209312  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9lpwl
	I0916 13:40:04.209322  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.209331  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.209340  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.213188  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.409393  735111 request.go:632] Waited for 195.377416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:04.409499  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:04.409516  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.409525  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.409532  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.412966  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.413420  735111 pod_ready.go:93] pod "kube-proxy-9lpwl" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:04.413439  735111 pod_ready.go:82] duration metric: took 400.376846ms for pod "kube-proxy-9lpwl" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.413448  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.609529  735111 request.go:632] Waited for 195.97896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:40:04.609609  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751
	I0916 13:40:04.609618  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.609631  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.609643  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.613259  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.809335  735111 request.go:632] Waited for 195.383746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.809422  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751
	I0916 13:40:04.809430  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:04.809439  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:04.809458  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:04.812751  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:04.813354  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:04.813380  735111 pod_ready.go:82] duration metric: took 399.924701ms for pod "kube-scheduler-ha-190751" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:04.813393  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.009758  735111 request.go:632] Waited for 196.25195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:40:05.009832  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m02
	I0916 13:40:05.009839  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.009848  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.009852  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.012798  735111 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 13:40:05.209800  735111 request.go:632] Waited for 196.394637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:05.209899  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m02
	I0916 13:40:05.209911  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.209922  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.209933  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.213079  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:05.213806  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:05.213830  735111 pod_ready.go:82] duration metric: took 400.426093ms for pod "kube-scheduler-ha-190751-m02" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.213842  735111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.409784  735111 request.go:632] Waited for 195.838547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m03
	I0916 13:40:05.409860  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-190751-m03
	I0916 13:40:05.409871  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.409883  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.409894  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.413051  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:05.609259  735111 request.go:632] Waited for 195.400698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:05.609333  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes/ha-190751-m03
	I0916 13:40:05.609359  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.609375  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.609382  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.612448  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:05.613005  735111 pod_ready.go:93] pod "kube-scheduler-ha-190751-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 13:40:05.613026  735111 pod_ready.go:82] duration metric: took 399.175294ms for pod "kube-scheduler-ha-190751-m03" in "kube-system" namespace to be "Ready" ...
	I0916 13:40:05.613039  735111 pod_ready.go:39] duration metric: took 5.199652226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 13:40:05.613057  735111 api_server.go:52] waiting for apiserver process to appear ...
	I0916 13:40:05.613111  735111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:40:05.638750  735111 api_server.go:72] duration metric: took 19.543336492s to wait for apiserver process to appear ...
	I0916 13:40:05.638783  735111 api_server.go:88] waiting for apiserver healthz status ...
	I0916 13:40:05.638810  735111 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0916 13:40:05.644921  735111 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0916 13:40:05.645004  735111 round_trippers.go:463] GET https://192.168.39.94:8443/version
	I0916 13:40:05.645014  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.645025  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.645033  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.645737  735111 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 13:40:05.645818  735111 api_server.go:141] control plane version: v1.31.1
	I0916 13:40:05.645833  735111 api_server.go:131] duration metric: took 7.043412ms to wait for apiserver health ...
	I0916 13:40:05.645841  735111 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 13:40:05.809279  735111 request.go:632] Waited for 163.352733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:05.809374  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:05.809382  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:05.809392  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:05.809398  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:05.815851  735111 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 13:40:05.823541  735111 system_pods.go:59] 24 kube-system pods found
	I0916 13:40:05.823571  735111 system_pods.go:61] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:40:05.823577  735111 system_pods.go:61] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:40:05.823581  735111 system_pods.go:61] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:40:05.823585  735111 system_pods.go:61] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:40:05.823588  735111 system_pods.go:61] "etcd-ha-190751-m03" [8b48a663-3100-4e8e-823e-6768605b14ee] Running
	I0916 13:40:05.823591  735111 system_pods.go:61] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:40:05.823594  735111 system_pods.go:61] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:40:05.823597  735111 system_pods.go:61] "kindnet-s7765" [0d614281-1ace-45f4-9f14-a5080a46ce0a] Running
	I0916 13:40:05.823600  735111 system_pods.go:61] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:40:05.823603  735111 system_pods.go:61] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:40:05.823608  735111 system_pods.go:61] "kube-apiserver-ha-190751-m03" [6a098e94-9f6a-4b74-bc97-b9549edd3285] Running
	I0916 13:40:05.823611  735111 system_pods.go:61] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:40:05.823614  735111 system_pods.go:61] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:40:05.823618  735111 system_pods.go:61] "kube-controller-manager-ha-190751-m03" [773d2c17-c182-40a1-b335-b03d6b030d7a] Running
	I0916 13:40:05.823621  735111 system_pods.go:61] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:40:05.823624  735111 system_pods.go:61] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:40:05.823627  735111 system_pods.go:61] "kube-proxy-9lpwl" [e12b5081-66dd-4aa1-9fc8-ff9aa93e1618] Running
	I0916 13:40:05.823630  735111 system_pods.go:61] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:40:05.823634  735111 system_pods.go:61] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:40:05.823637  735111 system_pods.go:61] "kube-scheduler-ha-190751-m03" [eafd129c-21e3-4841-84d0-81f629684de9] Running
	I0916 13:40:05.823639  735111 system_pods.go:61] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:40:05.823642  735111 system_pods.go:61] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:40:05.823646  735111 system_pods.go:61] "kube-vip-ha-190751-m03" [66c7d0df-b50f-41ad-b9f9-c9a48748390b] Running
	I0916 13:40:05.823651  735111 system_pods.go:61] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:40:05.823657  735111 system_pods.go:74] duration metric: took 177.8116ms to wait for pod list to return data ...
	I0916 13:40:05.823665  735111 default_sa.go:34] waiting for default service account to be created ...
	I0916 13:40:06.009131  735111 request.go:632] Waited for 185.378336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:40:06.009213  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/default/serviceaccounts
	I0916 13:40:06.009223  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:06.009234  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:06.009243  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:06.012758  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:06.012887  735111 default_sa.go:45] found service account: "default"
	I0916 13:40:06.012901  735111 default_sa.go:55] duration metric: took 189.229884ms for default service account to be created ...
	I0916 13:40:06.012909  735111 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 13:40:06.209214  735111 request.go:632] Waited for 196.217871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:06.209293  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/namespaces/kube-system/pods
	I0916 13:40:06.209310  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:06.209331  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:06.209356  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:06.216560  735111 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 13:40:06.223463  735111 system_pods.go:86] 24 kube-system pods found
	I0916 13:40:06.223491  735111 system_pods.go:89] "coredns-7c65d6cfc9-9lw8n" [19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9] Running
	I0916 13:40:06.223497  735111 system_pods.go:89] "coredns-7c65d6cfc9-gzkpj" [4e0ada83-1020-4bd4-be70-9a1a5972ff59] Running
	I0916 13:40:06.223501  735111 system_pods.go:89] "etcd-ha-190751" [be88be37-91ce-48e8-9f8b-d3103b49ba3c] Running
	I0916 13:40:06.223505  735111 system_pods.go:89] "etcd-ha-190751-m02" [12d190fd-ee89-4c15-9807-992ea738cbf8] Running
	I0916 13:40:06.223509  735111 system_pods.go:89] "etcd-ha-190751-m03" [8b48a663-3100-4e8e-823e-6768605b14ee] Running
	I0916 13:40:06.223512  735111 system_pods.go:89] "kindnet-gpb96" [bb699362-acf1-471c-8b39-8a7498a7da52] Running
	I0916 13:40:06.223516  735111 system_pods.go:89] "kindnet-qfl9j" [c3185688-2626-48af-9067-60c59d3fc806] Running
	I0916 13:40:06.223520  735111 system_pods.go:89] "kindnet-s7765" [0d614281-1ace-45f4-9f14-a5080a46ce0a] Running
	I0916 13:40:06.223523  735111 system_pods.go:89] "kube-apiserver-ha-190751" [c91fdd4e-99d4-4130-8240-0ae5f9339cd0] Running
	I0916 13:40:06.223526  735111 system_pods.go:89] "kube-apiserver-ha-190751-m02" [bdbe2c9a-88c9-468e-b902-daddcf463dad] Running
	I0916 13:40:06.223529  735111 system_pods.go:89] "kube-apiserver-ha-190751-m03" [6a098e94-9f6a-4b74-bc97-b9549edd3285] Running
	I0916 13:40:06.223532  735111 system_pods.go:89] "kube-controller-manager-ha-190751" [fefa0f76-38b3-4138-8e0a-d9ac18bdbeac] Running
	I0916 13:40:06.223536  735111 system_pods.go:89] "kube-controller-manager-ha-190751-m02" [22abf056-bbbc-4702-aed6-60aa470bc87d] Running
	I0916 13:40:06.223539  735111 system_pods.go:89] "kube-controller-manager-ha-190751-m03" [773d2c17-c182-40a1-b335-b03d6b030d7a] Running
	I0916 13:40:06.223542  735111 system_pods.go:89] "kube-proxy-24q9n" [12db4b5d-002f-4e38-95a1-3b12747c80a3] Running
	I0916 13:40:06.223545  735111 system_pods.go:89] "kube-proxy-9d7kt" [ba8c34d1-5931-4e70-8d01-798817397f78] Running
	I0916 13:40:06.223548  735111 system_pods.go:89] "kube-proxy-9lpwl" [e12b5081-66dd-4aa1-9fc8-ff9aa93e1618] Running
	I0916 13:40:06.223551  735111 system_pods.go:89] "kube-scheduler-ha-190751" [677eae56-307b-4bef-939e-5eae5b8a3fff] Running
	I0916 13:40:06.223554  735111 system_pods.go:89] "kube-scheduler-ha-190751-m02" [9c09f981-ca69-420f-87c7-2a9c6692b9d7] Running
	I0916 13:40:06.223557  735111 system_pods.go:89] "kube-scheduler-ha-190751-m03" [eafd129c-21e3-4841-84d0-81f629684de9] Running
	I0916 13:40:06.223560  735111 system_pods.go:89] "kube-vip-ha-190751" [d979d6e0-d0db-4fe1-a8e7-d8e361f20a88] Running
	I0916 13:40:06.223564  735111 system_pods.go:89] "kube-vip-ha-190751-m02" [1c08285c-dafc-45f7-b1b3-dc86bf623fde] Running
	I0916 13:40:06.223567  735111 system_pods.go:89] "kube-vip-ha-190751-m03" [66c7d0df-b50f-41ad-b9f9-c9a48748390b] Running
	I0916 13:40:06.223569  735111 system_pods.go:89] "storage-provisioner" [f01b81dc-2ff8-41de-8c63-e09a0ead6545] Running
	I0916 13:40:06.223579  735111 system_pods.go:126] duration metric: took 210.665549ms to wait for k8s-apps to be running ...
	I0916 13:40:06.223589  735111 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 13:40:06.223634  735111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:40:06.239619  735111 system_svc.go:56] duration metric: took 16.018236ms WaitForService to wait for kubelet
	I0916 13:40:06.239654  735111 kubeadm.go:582] duration metric: took 20.144246804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:40:06.239677  735111 node_conditions.go:102] verifying NodePressure condition ...
	I0916 13:40:06.409601  735111 request.go:632] Waited for 169.742083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.94:8443/api/v1/nodes
	I0916 13:40:06.409694  735111 round_trippers.go:463] GET https://192.168.39.94:8443/api/v1/nodes
	I0916 13:40:06.409706  735111 round_trippers.go:469] Request Headers:
	I0916 13:40:06.409775  735111 round_trippers.go:473]     Accept: application/json, */*
	I0916 13:40:06.409792  735111 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 13:40:06.413568  735111 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 13:40:06.414639  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:40:06.414663  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:40:06.414684  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:40:06.414691  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:40:06.414698  735111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 13:40:06.414703  735111 node_conditions.go:123] node cpu capacity is 2
	I0916 13:40:06.414711  735111 node_conditions.go:105] duration metric: took 175.028902ms to run NodePressure ...
	I0916 13:40:06.414729  735111 start.go:241] waiting for startup goroutines ...
	I0916 13:40:06.414759  735111 start.go:255] writing updated cluster config ...
	I0916 13:40:06.415139  735111 ssh_runner.go:195] Run: rm -f paused
	I0916 13:40:06.465132  735111 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0916 13:40:06.467878  735111 out.go:177] * Done! kubectl is now configured to use "ha-190751" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.901230537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494275901209612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad0a10eb-149b-40ed-834f-84b84e41cc6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.902371760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd7d5560-f199-44b4-b5c0-23e0e919822b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.902428433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd7d5560-f199-44b4-b5c0-23e0e919822b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.902640509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd7d5560-f199-44b4-b5c0-23e0e919822b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.937008501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82f9fe93-0d30-4d02-8174-ab93680b1c72 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.937111075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82f9fe93-0d30-4d02-8174-ab93680b1c72 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.938318677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67b9c6b9-faf3-4221-94b9-96dcca7b8102 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.938716874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494275938697073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67b9c6b9-faf3-4221-94b9-96dcca7b8102 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.939283213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a150415b-6b51-434b-8fa2-49205a1792ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.939396575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a150415b-6b51-434b-8fa2-49205a1792ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.939636495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a150415b-6b51-434b-8fa2-49205a1792ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.975501742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a82c6173-98d6-48c7-a5cb-0d634b05bd2e name=/runtime.v1.RuntimeService/Version
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.975602243Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a82c6173-98d6-48c7-a5cb-0d634b05bd2e name=/runtime.v1.RuntimeService/Version
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.976587419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a21afc2b-59ff-4420-8700-71cd83fea893 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.977061740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494275977040368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a21afc2b-59ff-4420-8700-71cd83fea893 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.977595517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b58f209f-be10-40e7-b2a8-e9a3150eee5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.977647208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b58f209f-be10-40e7-b2a8-e9a3150eee5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:35 ha-190751 crio[667]: time="2024-09-16 13:44:35.977910303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b58f209f-be10-40e7-b2a8-e9a3150eee5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:36 ha-190751 crio[667]: time="2024-09-16 13:44:36.012795811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e1161ae-6c6e-4252-8630-74d6249ae5bb name=/runtime.v1.RuntimeService/Version
	Sep 16 13:44:36 ha-190751 crio[667]: time="2024-09-16 13:44:36.013039933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e1161ae-6c6e-4252-8630-74d6249ae5bb name=/runtime.v1.RuntimeService/Version
	Sep 16 13:44:36 ha-190751 crio[667]: time="2024-09-16 13:44:36.013784381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1130d4d-b6a9-4a33-a41f-1eb6ccdf14c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:44:36 ha-190751 crio[667]: time="2024-09-16 13:44:36.014282332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494276014263498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1130d4d-b6a9-4a33-a41f-1eb6ccdf14c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:44:36 ha-190751 crio[667]: time="2024-09-16 13:44:36.014760893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dad623f-d6bc-442a-b3a7-e1303f325609 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:36 ha-190751 crio[667]: time="2024-09-16 13:44:36.014808633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dad623f-d6bc-442a-b3a7-e1303f325609 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:44:36 ha-190751 crio[667]: time="2024-09-16 13:44:36.015134436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494009959572833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905850761514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726493905853240042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0,PodSandboxId:a8d65f7a2c445bbd65845feaa6d44e7a6803741ece5c02dc2af29bc92b856eda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726493904181656934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172649386
3271598131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726493862131295974,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061,PodSandboxId:6ef66800e15f664d46f5fea0bf074e6d8f215f27a6826e5c7c3ce86e05c27ec2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726493852347558523,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ae23ac25e2d9f21a57c25091692659,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726493850593194695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad,PodSandboxId:a61ae034ef53d2ad3541baf2947573411c903bab5f21e57550892cd37fb14c67,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726493850589583740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b,PodSandboxId:42b1cda382f84b5d55beb45c086e32038dc725eb83913efd7ede62eb7011958a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726493850576490172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726493850404260188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dad623f-d6bc-442a-b3a7-e1303f325609 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1ff16b4cf488d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   70804a075dc34       busybox-7dff88458-lsqcp
	e33b03d2f6fce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d74b47a92fc73       coredns-7c65d6cfc9-gzkpj
	5597ff6fa9128       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   faf5324ae84ec       coredns-7c65d6cfc9-9lw8n
	85e2956fe3523       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a8d65f7a2c445       storage-provisioner
	d2fb4efd07b92       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   e227eb76eed28       kube-proxy-9d7kt
	876c9f45c3848       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   06c5005bbb715       kindnet-gpb96
	ce48d6fe2a109       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   6ef66800e15f6       kube-vip-ha-190751
	0cd93f6d25b96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   235857e1be3ea       etcd-ha-190751
	13c8d0e1fdcbe       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   a61ae034ef53d       kube-controller-manager-ha-190751
	2cb375fdf3e21       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   42b1cda382f84       kube-apiserver-ha-190751
	3d2fdc916e364       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   2b68d5be2f2cf       kube-scheduler-ha-190751
	
	
	==> coredns [5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7] <==
	[INFO] 10.244.0.4:44564 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000175716s
	[INFO] 10.244.2.2:52543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136907s
	[INFO] 10.244.2.2:35351 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001547835s
	[INFO] 10.244.2.2:39675 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165265s
	[INFO] 10.244.2.2:37048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001066948s
	[INFO] 10.244.2.2:56795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069535s
	[INFO] 10.244.1.2:57890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135841s
	[INFO] 10.244.1.2:47650 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001636029s
	[INFO] 10.244.1.2:50206 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099676s
	[INFO] 10.244.1.2:55092 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109421s
	[INFO] 10.244.0.4:53870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097861s
	[INFO] 10.244.0.4:42443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049844s
	[INFO] 10.244.0.4:52687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057203s
	[INFO] 10.244.2.2:34837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122205s
	[INFO] 10.244.2.2:39661 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123335s
	[INFO] 10.244.2.2:52074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080782s
	[INFO] 10.244.1.2:41492 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098139s
	[INFO] 10.244.1.2:49674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088502s
	[INFO] 10.244.0.4:53518 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259854s
	[INFO] 10.244.0.4:41118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155352s
	[INFO] 10.244.0.4:33823 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119363s
	[INFO] 10.244.2.2:44582 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180459s
	[INFO] 10.244.2.2:52118 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196503s
	[INFO] 10.244.1.2:43708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011298s
	[INFO] 10.244.1.2:42623 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011952s
	
	
	==> coredns [e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781] <==
	[INFO] 10.244.2.2:59563 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000156884s
	[INFO] 10.244.1.2:58517 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00014134s
	[INFO] 10.244.1.2:36244 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000672675s
	[INFO] 10.244.1.2:37179 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001780819s
	[INFO] 10.244.0.4:50469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268768s
	[INFO] 10.244.0.4:48039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163904s
	[INFO] 10.244.0.4:34482 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084666s
	[INFO] 10.244.0.4:39892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003221704s
	[INFO] 10.244.0.4:58788 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139358s
	[INFO] 10.244.2.2:57520 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099764s
	[INFO] 10.244.2.2:33023 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142913s
	[INFO] 10.244.2.2:46886 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071348s
	[INFO] 10.244.1.2:48181 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120675s
	[INFO] 10.244.1.2:46254 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007984s
	[INFO] 10.244.1.2:51236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001105782s
	[INFO] 10.244.1.2:43880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069986s
	[INFO] 10.244.0.4:51480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109815s
	[INFO] 10.244.2.2:33439 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156091s
	[INFO] 10.244.1.2:40338 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202214s
	[INFO] 10.244.1.2:41511 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135597s
	[INFO] 10.244.0.4:57318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142285s
	[INFO] 10.244.2.2:51122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159294s
	[INFO] 10.244.2.2:45477 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016112s
	[INFO] 10.244.1.2:53140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015857s
	[INFO] 10.244.1.2:56526 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182857s
	
	
	==> describe nodes <==
	Name:               ha-190751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T13_37_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:44:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:40:42 +0000   Mon, 16 Sep 2024 13:38:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    ha-190751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 413212b342c542b3a63285d76f88cc9f
	  System UUID:                413212b3-42c5-42b3-a632-85d76f88cc9f
	  Boot ID:                    757a1925-23d7-4d65-93ec-732a8b69642f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lsqcp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 coredns-7c65d6cfc9-9lw8n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 coredns-7c65d6cfc9-gzkpj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 etcd-ha-190751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m58s
	  kube-system                 kindnet-gpb96                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m55s
	  kube-system                 kube-apiserver-ha-190751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-controller-manager-ha-190751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-proxy-9d7kt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-ha-190751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-vip-ha-190751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m52s  kube-proxy       
	  Normal  Starting                 6m57s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m57s  kubelet          Node ha-190751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m57s  kubelet          Node ha-190751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m57s  kubelet          Node ha-190751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m56s  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal  NodeReady                6m13s  kubelet          Node ha-190751 status is now: NodeReady
	  Normal  RegisteredNode           5m58s  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal  RegisteredNode           4m45s  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	
	
	Name:               ha-190751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_38_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:38:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:41:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 13:40:31 +0000   Mon, 16 Sep 2024 13:42:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.192
	  Hostname:    ha-190751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 550acf86555f4901ac21dc9dc8bbc28f
	  System UUID:                550acf86-555f-4901-ac21-dc9dc8bbc28f
	  Boot ID:                    fb4d2fc9-b82a-43f9-90cb-6b91307d8d37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wnt5k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 etcd-ha-190751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m5s
	  kube-system                 kindnet-qfl9j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m7s
	  kube-system                 kube-apiserver-ha-190751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-ha-190751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-24q9n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-scheduler-ha-190751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-190751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m7s (x8 over 6m7s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s (x8 over 6m7s)  kubelet          Node ha-190751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s (x7 over 6m7s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                 node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           5m58s                node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           4m45s                node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  NodeNotReady             2m31s                node-controller  Node ha-190751-m02 status is now: NodeNotReady
	
	
	Name:               ha-190751-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_39_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:39:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:44:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:39:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:39:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:39:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:40:12 +0000   Mon, 16 Sep 2024 13:40:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-190751-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a371c754a93a41bd8e51ba43403aed52
	  System UUID:                a371c754-a93a-41bd-8e51-ba43403aed52
	  Boot ID:                    1fe05264-4a42-4111-91d4-db1d24d6b79c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6sc6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 etcd-ha-190751-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-s7765                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m54s
	  kube-system                 kube-apiserver-ha-190751-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-ha-190751-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-9lpwl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-ha-190751-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-190751-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m47s                  kube-proxy       
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-190751-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal  RegisteredNode           4m45s                  node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	
	
	Name:               ha-190751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_40_46_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:44:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:40:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:40:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:40:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:41:16 +0000   Mon, 16 Sep 2024 13:41:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-190751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 99332c0e26304b3097b2fce26060f009
	  System UUID:                99332c0e-2630-4b30-97b2-fce26060f009
	  Boot ID:                    64cf2850-6571-40c7-816a-9ba47cc07e90
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9nmfv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-proxy-tk6f6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m50s (x2 over 3m51s)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x2 over 3m51s)  kubelet          Node ha-190751-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x2 over 3m51s)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal  NodeReady                3m32s                  kubelet          Node ha-190751-m04 status is now: NodeReady
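
	The describe output above shows ha-190751-m02 NotReady and carrying node.kubernetes.io/unreachable taints, while ha-190751, ha-190751-m03 and ha-190751-m04 report Ready. A quick cross-check against the same kubeconfig (the jsonpath expression is just one way to surface taints):

	    # Node readiness at a glance.
	    kubectl get nodes -o wide
	    # Per-node taints; m02 should show the unreachable NoSchedule/NoExecute taints seen above.
	    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'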
	
	
	==> dmesg <==
	[Sep16 13:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050855] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039668] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.744321] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.392336] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.569791] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.291459] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.062528] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065864] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.157574] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.135658] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.243263] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.876209] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.159219] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.061484] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.191933] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.087738] kauditd_printk_skb: 79 callbacks suppressed
	[Sep16 13:38] kauditd_printk_skb: 69 callbacks suppressed
	[ +12.548550] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90] <==
	{"level":"warn","ts":"2024-09-16T13:44:36.199018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.229125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.275684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.281620Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.285692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.295008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.299112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.300912Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.306882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.310079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.312484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.317272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.322793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.328526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.332658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.335355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.343039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.348915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.355407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.359400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.362562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.366605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.373106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.379237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T13:44:36.398970Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c23cd90330b5fc4f","from":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:44:36 up 7 min,  0 users,  load average: 0.56, 0.39, 0.21
	Linux ha-190751 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629] <==
	I0916 13:44:03.336756       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:44:13.334012       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:44:13.334165       1 main.go:299] handling current node
	I0916 13:44:13.334200       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:44:13.334218       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:44:13.334400       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:44:13.334422       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:44:13.334479       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:44:13.334496       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:44:23.330326       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:44:23.330389       1 main.go:299] handling current node
	I0916 13:44:23.330403       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:44:23.330408       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:44:23.330529       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:44:23.330535       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:44:23.330587       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:44:23.330591       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:44:33.332170       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:44:33.332297       1 main.go:299] handling current node
	I0916 13:44:33.332332       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:44:33.332353       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:44:33.332488       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:44:33.332523       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:44:33.332601       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:44:33.332621       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
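
	kindnet is iterating over the four nodes and their pod CIDRs (10.244.0.0/24 through 10.244.3.0/24). The same assignment can be read directly from the node objects, e.g.:

	    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR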
	
	
	==> kube-apiserver [2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b] <==
	W0916 13:37:35.613917       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94]
	I0916 13:37:35.615132       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 13:37:35.624415       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 13:37:35.827166       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 13:37:39.689217       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 13:37:39.701776       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 13:37:39.710054       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 13:37:41.127261       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 13:37:41.327290       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 13:40:11.257358       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39950: use of closed network connection
	E0916 13:40:11.454111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39978: use of closed network connection
	E0916 13:40:11.636499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39984: use of closed network connection
	E0916 13:40:11.840283       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39996: use of closed network connection
	E0916 13:40:12.021189       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40006: use of closed network connection
	E0916 13:40:12.205489       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40024: use of closed network connection
	E0916 13:40:12.384741       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40054: use of closed network connection
	E0916 13:40:12.574388       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40072: use of closed network connection
	E0916 13:40:12.749001       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37174: use of closed network connection
	E0916 13:40:13.066042       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37194: use of closed network connection
	E0916 13:40:13.245757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37222: use of closed network connection
	E0916 13:40:13.436949       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37248: use of closed network connection
	E0916 13:40:13.616454       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37268: use of closed network connection
	E0916 13:40:13.824166       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37286: use of closed network connection
	E0916 13:40:14.008342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37304: use of closed network connection
	W0916 13:41:35.625317       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.134 192.168.39.94]
	
	
	==> kube-controller-manager [13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad] <==
	I0916 13:40:46.061918       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-190751-m04" podCIDRs=["10.244.3.0/24"]
	I0916 13:40:46.063041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.063224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.072623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.321470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:46.702227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:47.082576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:48.649251       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:48.708573       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:50.920633       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-190751-m04"
	I0916 13:40:50.921471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:50.942281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:40:56.091517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:04.673759       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-190751-m04"
	I0916 13:41:04.674036       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:04.689464       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:05.935484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:41:16.221567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:42:05.964385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	I0916 13:42:05.964586       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-190751-m04"
	I0916 13:42:05.985678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	I0916 13:42:06.119405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.494924ms"
	I0916 13:42:06.120105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.05µs"
	I0916 13:42:07.101541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	I0916 13:42:11.166665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	
	
	==> kube-proxy [d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 13:37:43.541653       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 13:37:43.561090       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	E0916 13:37:43.561216       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 13:37:43.596546       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 13:37:43.596577       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 13:37:43.596598       1 server_linux.go:169] "Using iptables Proxier"
	I0916 13:37:43.600422       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 13:37:43.600713       1 server.go:483] "Version info" version="v1.31.1"
	I0916 13:37:43.600739       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:37:43.602773       1 config.go:199] "Starting service config controller"
	I0916 13:37:43.603076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 13:37:43.603330       1 config.go:105] "Starting endpoint slice config controller"
	I0916 13:37:43.603354       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 13:37:43.604127       1 config.go:328] "Starting node config controller"
	I0916 13:37:43.604167       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 13:37:43.703958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 13:37:43.704048       1 shared_informer.go:320] Caches are synced for service config
	I0916 13:37:43.707176       1 shared_informer.go:320] Caches are synced for node config
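
	The nftables errors at the top of this block are kube-proxy's rule cleanup failing with "Operation not supported"; the proxy then runs in iptables mode ("Using iptables Proxier") and its caches sync normally. One way to confirm the iptables rules actually landed on the node (profile name taken from this report; the grep pattern is illustrative):

	    minikube ssh -p ha-190751 "sudo iptables-save | grep -c KUBE-SERVICES"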
	
	
	==> kube-scheduler [3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c] <==
	W0916 13:37:35.187363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 13:37:35.187418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.189451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 13:37:35.189536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.192628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 13:37:35.192665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.197996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 13:37:35.198037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.202047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 13:37:35.202088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.205639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 13:37:35.205680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.218014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 13:37:35.218057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 13:37:35.232785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 13:37:35.232941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 13:37:36.647896       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 13:40:46.111447       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.111635       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1bfac972-00f2-440b-8577-132ebf2ef8fa(kube-system/kube-proxy-v4ngc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v4ngc"
	E0916 13:40:46.111674       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" pod="kube-system/kube-proxy-v4ngc"
	I0916 13:40:46.111701       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.136509       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	E0916 13:40:46.136581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a53af4e2-ffdc-4e32-8f97-f0b2684145be(kube-system/kindnet-9nmfv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9nmfv"
	E0916 13:40:46.136599       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" pod="kube-system/kindnet-9nmfv"
	I0916 13:40:46.136617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	
	
	==> kubelet <==
	Sep 16 13:42:59 ha-190751 kubelet[1315]: E0916 13:42:59.737464    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494179737167298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:42:59 ha-190751 kubelet[1315]: E0916 13:42:59.737550    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494179737167298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:09 ha-190751 kubelet[1315]: E0916 13:43:09.738645    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494189738384757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:09 ha-190751 kubelet[1315]: E0916 13:43:09.739025    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494189738384757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:19 ha-190751 kubelet[1315]: E0916 13:43:19.740916    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494199740215330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:19 ha-190751 kubelet[1315]: E0916 13:43:19.741381    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494199740215330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:29 ha-190751 kubelet[1315]: E0916 13:43:29.746273    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494209745740647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:29 ha-190751 kubelet[1315]: E0916 13:43:29.746714    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494209745740647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:39 ha-190751 kubelet[1315]: E0916 13:43:39.648884    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 13:43:39 ha-190751 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 13:43:39 ha-190751 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 13:43:39 ha-190751 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 13:43:39 ha-190751 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 13:43:39 ha-190751 kubelet[1315]: E0916 13:43:39.749540    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494219749060015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:39 ha-190751 kubelet[1315]: E0916 13:43:39.749585    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494219749060015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:49 ha-190751 kubelet[1315]: E0916 13:43:49.751757    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494229751259870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:49 ha-190751 kubelet[1315]: E0916 13:43:49.751899    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494229751259870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:59 ha-190751 kubelet[1315]: E0916 13:43:59.754523    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494239753916456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:43:59 ha-190751 kubelet[1315]: E0916 13:43:59.754560    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494239753916456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:44:09 ha-190751 kubelet[1315]: E0916 13:44:09.756067    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494249755586579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:44:09 ha-190751 kubelet[1315]: E0916 13:44:09.756117    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494249755586579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:44:19 ha-190751 kubelet[1315]: E0916 13:44:19.761606    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494259758255097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:44:19 ha-190751 kubelet[1315]: E0916 13:44:19.761754    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494259758255097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:44:29 ha-190751 kubelet[1315]: E0916 13:44:29.764334    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494269763904919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:44:29 ha-190751 kubelet[1315]: E0916 13:44:29.764354    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494269763904919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-190751 -n ha-190751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-190751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (47.48s)
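
Each of the repeated kubelet eviction-manager errors above dumps the same ImageFsInfoResponse payload: one image filesystem entry for /var/lib/containers/storage/overlay-images (UsedBytes 146316, InodesUsed 67) and an empty ContainerFilesystems list, which is the only part of the dump that is never populated. The following is a minimal Go sketch, using simplified stand-in types rather than the real k8s.io/cri-api types, that reconstructs that payload and reports which parts are present; it is a reading aid for the log lines, not kubelet or CRI-O code.

package main

import "fmt"

// Simplified stand-ins for the fields of the ImageFsInfoResponse that the
// kubelet keeps dumping above; these are NOT the real k8s.io/cri-api types.
type FilesystemUsage struct {
	Timestamp  int64  // nanoseconds, e.g. 1726494269763904919
	Mountpoint string // e.g. /var/lib/containers/storage/overlay-images
	UsedBytes  uint64 // e.g. 146316
	InodesUsed uint64 // e.g. 67
}

type ImageFsInfoResponse struct {
	ImageFilesystems     []FilesystemUsage
	ContainerFilesystems []FilesystemUsage // empty in every dump above
}

// describe reports which parts of the payload are populated.
func describe(resp ImageFsInfoResponse) string {
	if len(resp.ContainerFilesystems) > 0 {
		return "both image and container filesystem stats reported"
	}
	if len(resp.ImageFilesystems) == 0 {
		return "no filesystem stats reported at all"
	}
	img := resp.ImageFilesystems[0]
	return fmt.Sprintf("image fs %q reported (%d bytes, %d inodes used), container fs stats missing",
		img.Mountpoint, img.UsedBytes, img.InodesUsed)
}

func main() {
	// The payload from the last eviction-manager line above.
	resp := ImageFsInfoResponse{
		ImageFilesystems: []FilesystemUsage{{
			Timestamp:  1726494269763904919,
			Mountpoint: "/var/lib/containers/storage/overlay-images",
			UsedBytes:  146316,
			InodesUsed: 67,
		}},
	}
	fmt.Println(describe(resp))
}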

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (402.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-190751 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-190751 -v=7 --alsologtostderr
E0916 13:45:50.206463  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:46:17.909487  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-190751 -v=7 --alsologtostderr: exit status 82 (2m1.840763723s)

                                                
                                                
-- stdout --
	* Stopping node "ha-190751-m04"  ...
	* Stopping node "ha-190751-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:44:37.780596  740766 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:44:37.780740  740766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:37.780754  740766 out.go:358] Setting ErrFile to fd 2...
	I0916 13:44:37.780762  740766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:44:37.780967  740766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:44:37.781226  740766 out.go:352] Setting JSON to false
	I0916 13:44:37.781350  740766 mustload.go:65] Loading cluster: ha-190751
	I0916 13:44:37.781841  740766 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:44:37.781939  740766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:44:37.782137  740766 mustload.go:65] Loading cluster: ha-190751
	I0916 13:44:37.782322  740766 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:44:37.782355  740766 stop.go:39] StopHost: ha-190751-m04
	I0916 13:44:37.782770  740766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:37.782807  740766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:37.797307  740766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0916 13:44:37.797824  740766 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:37.798381  740766 main.go:141] libmachine: Using API Version  1
	I0916 13:44:37.798403  740766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:37.798710  740766 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:37.801267  740766 out.go:177] * Stopping node "ha-190751-m04"  ...
	I0916 13:44:37.802442  740766 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 13:44:37.802469  740766 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:44:37.802682  740766 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 13:44:37.802706  740766 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:44:37.805184  740766 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:37.805719  740766 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:40:29 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:44:37.805747  740766 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:44:37.805903  740766 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:44:37.806083  740766 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:44:37.806249  740766 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:44:37.806368  740766 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:44:37.892151  740766 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 13:44:37.944273  740766 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 13:44:37.996572  740766 main.go:141] libmachine: Stopping "ha-190751-m04"...
	I0916 13:44:37.996606  740766 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:44:37.998174  740766 main.go:141] libmachine: (ha-190751-m04) Calling .Stop
	I0916 13:44:38.001533  740766 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 0/120
	I0916 13:44:39.169516  740766 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:44:39.170843  740766 main.go:141] libmachine: Machine "ha-190751-m04" was stopped.
	I0916 13:44:39.170863  740766 stop.go:75] duration metric: took 1.368424239s to stop
	I0916 13:44:39.170883  740766 stop.go:39] StopHost: ha-190751-m03
	I0916 13:44:39.171171  740766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:44:39.171210  740766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:44:39.185558  740766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
	I0916 13:44:39.185952  740766 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:44:39.186482  740766 main.go:141] libmachine: Using API Version  1
	I0916 13:44:39.186505  740766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:44:39.186891  740766 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:44:39.188710  740766 out.go:177] * Stopping node "ha-190751-m03"  ...
	I0916 13:44:39.189602  740766 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 13:44:39.189628  740766 main.go:141] libmachine: (ha-190751-m03) Calling .DriverName
	I0916 13:44:39.189828  740766 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 13:44:39.189852  740766 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHHostname
	I0916 13:44:39.192524  740766 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:39.192948  740766 main.go:141] libmachine: (ha-190751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:4e:0a", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:39:06 +0000 UTC Type:0 Mac:52:54:00:0e:4e:0a Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-190751-m03 Clientid:01:52:54:00:0e:4e:0a}
	I0916 13:44:39.192981  740766 main.go:141] libmachine: (ha-190751-m03) DBG | domain ha-190751-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:0e:4e:0a in network mk-ha-190751
	I0916 13:44:39.193097  740766 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHPort
	I0916 13:44:39.193256  740766 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHKeyPath
	I0916 13:44:39.193372  740766 main.go:141] libmachine: (ha-190751-m03) Calling .GetSSHUsername
	I0916 13:44:39.193518  740766 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m03/id_rsa Username:docker}
	I0916 13:44:39.276351  740766 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 13:44:39.328284  740766 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 13:44:39.381341  740766 main.go:141] libmachine: Stopping "ha-190751-m03"...
	I0916 13:44:39.381364  740766 main.go:141] libmachine: (ha-190751-m03) Calling .GetState
	I0916 13:44:39.382830  740766 main.go:141] libmachine: (ha-190751-m03) Calling .Stop
	I0916 13:44:39.386231  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 0/120
	I0916 13:44:40.388107  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 1/120
	I0916 13:44:41.389378  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 2/120
	I0916 13:44:42.390570  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 3/120
	I0916 13:44:43.391954  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 4/120
	I0916 13:44:44.393820  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 5/120
	I0916 13:44:45.395480  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 6/120
	I0916 13:44:46.397046  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 7/120
	I0916 13:44:47.398590  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 8/120
	I0916 13:44:48.400100  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 9/120
	I0916 13:44:49.401864  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 10/120
	I0916 13:44:50.403301  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 11/120
	I0916 13:44:51.404636  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 12/120
	I0916 13:44:52.406048  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 13/120
	I0916 13:44:53.407458  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 14/120
	I0916 13:44:54.409501  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 15/120
	I0916 13:44:55.410921  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 16/120
	I0916 13:44:56.412261  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 17/120
	I0916 13:44:57.413661  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 18/120
	I0916 13:44:58.415169  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 19/120
	I0916 13:44:59.416713  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 20/120
	I0916 13:45:00.418405  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 21/120
	I0916 13:45:01.420007  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 22/120
	I0916 13:45:02.421654  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 23/120
	I0916 13:45:03.422877  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 24/120
	I0916 13:45:04.425218  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 25/120
	I0916 13:45:05.426805  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 26/120
	I0916 13:45:06.428374  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 27/120
	I0916 13:45:07.429968  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 28/120
	I0916 13:45:08.431530  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 29/120
	I0916 13:45:09.433320  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 30/120
	I0916 13:45:10.435133  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 31/120
	I0916 13:45:11.436917  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 32/120
	I0916 13:45:12.438532  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 33/120
	I0916 13:45:13.440250  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 34/120
	I0916 13:45:14.441973  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 35/120
	I0916 13:45:15.444202  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 36/120
	I0916 13:45:16.445518  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 37/120
	I0916 13:45:17.446946  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 38/120
	I0916 13:45:18.448231  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 39/120
	I0916 13:45:19.450010  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 40/120
	I0916 13:45:20.452106  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 41/120
	I0916 13:45:21.453846  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 42/120
	I0916 13:45:22.456338  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 43/120
	I0916 13:45:23.457747  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 44/120
	I0916 13:45:24.459536  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 45/120
	I0916 13:45:25.460937  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 46/120
	I0916 13:45:26.462369  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 47/120
	I0916 13:45:27.464213  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 48/120
	I0916 13:45:28.465685  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 49/120
	I0916 13:45:29.467929  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 50/120
	I0916 13:45:30.469398  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 51/120
	I0916 13:45:31.470889  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 52/120
	I0916 13:45:32.472310  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 53/120
	I0916 13:45:33.474770  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 54/120
	I0916 13:45:34.476737  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 55/120
	I0916 13:45:35.478063  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 56/120
	I0916 13:45:36.480091  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 57/120
	I0916 13:45:37.481587  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 58/120
	I0916 13:45:38.483113  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 59/120
	I0916 13:45:39.484893  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 60/120
	I0916 13:45:40.486032  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 61/120
	I0916 13:45:41.488261  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 62/120
	I0916 13:45:42.489663  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 63/120
	I0916 13:45:43.491214  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 64/120
	I0916 13:45:44.493058  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 65/120
	I0916 13:45:45.494342  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 66/120
	I0916 13:45:46.495697  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 67/120
	I0916 13:45:47.497322  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 68/120
	I0916 13:45:48.498701  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 69/120
	I0916 13:45:49.500473  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 70/120
	I0916 13:45:50.501799  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 71/120
	I0916 13:45:51.503077  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 72/120
	I0916 13:45:52.504326  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 73/120
	I0916 13:45:53.505808  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 74/120
	I0916 13:45:54.507166  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 75/120
	I0916 13:45:55.508780  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 76/120
	I0916 13:45:56.510162  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 77/120
	I0916 13:45:57.511577  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 78/120
	I0916 13:45:58.512901  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 79/120
	I0916 13:45:59.514654  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 80/120
	I0916 13:46:00.516101  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 81/120
	I0916 13:46:01.517370  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 82/120
	I0916 13:46:02.518920  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 83/120
	I0916 13:46:03.520207  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 84/120
	I0916 13:46:04.521868  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 85/120
	I0916 13:46:05.523102  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 86/120
	I0916 13:46:06.524252  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 87/120
	I0916 13:46:07.525641  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 88/120
	I0916 13:46:08.526824  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 89/120
	I0916 13:46:09.527959  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 90/120
	I0916 13:46:10.529382  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 91/120
	I0916 13:46:11.530670  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 92/120
	I0916 13:46:12.532340  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 93/120
	I0916 13:46:13.533619  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 94/120
	I0916 13:46:14.535079  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 95/120
	I0916 13:46:15.536223  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 96/120
	I0916 13:46:16.537538  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 97/120
	I0916 13:46:17.538798  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 98/120
	I0916 13:46:18.540060  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 99/120
	I0916 13:46:19.541378  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 100/120
	I0916 13:46:20.543017  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 101/120
	I0916 13:46:21.544597  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 102/120
	I0916 13:46:22.545883  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 103/120
	I0916 13:46:23.547306  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 104/120
	I0916 13:46:24.548866  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 105/120
	I0916 13:46:25.550249  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 106/120
	I0916 13:46:26.551977  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 107/120
	I0916 13:46:27.553359  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 108/120
	I0916 13:46:28.554817  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 109/120
	I0916 13:46:29.556518  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 110/120
	I0916 13:46:30.557899  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 111/120
	I0916 13:46:31.559172  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 112/120
	I0916 13:46:32.560603  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 113/120
	I0916 13:46:33.562075  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 114/120
	I0916 13:46:34.563289  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 115/120
	I0916 13:46:35.564566  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 116/120
	I0916 13:46:36.565939  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 117/120
	I0916 13:46:37.567507  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 118/120
	I0916 13:46:38.569062  740766 main.go:141] libmachine: (ha-190751-m03) Waiting for machine to stop 119/120
	I0916 13:46:39.570007  740766 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 13:46:39.570076  740766 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0916 13:46:39.571932  740766 out.go:201] 
	W0916 13:46:39.573261  740766 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0916 13:46:39.573278  740766 out.go:270] * 
	* 
	W0916 13:46:39.576383  740766 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 13:46:39.577633  740766 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-190751 -v=7 --alsologtostderr" : exit status 82
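
The stderr block above shows the stop path polling the m03 VM state roughly once per second, printing "Waiting for machine to stop N/120", and giving up after attempt 119/120 with "unable to stop vm, current state \"Running\"", which minikube reports as GUEST_STOP_TIMEOUT and exit status 82 after about two minutes. Below is a minimal, generic Go sketch of that kind of bounded stop-polling loop; it is not minikube's actual implementation, and getState is a hypothetical placeholder for a hypervisor state query.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errStopTimeout stands in for the stop failure the log reports
// ("unable to stop vm, current state Running") before minikube
// exits with GUEST_STOP_TIMEOUT.
var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

// waitForStop polls getState once per interval, for at most attempts tries,
// mirroring the "Waiting for machine to stop N/120" lines in the log above.
// getState is a hypothetical placeholder for querying the hypervisor.
func waitForStop(getState func() string, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if getState() != "Running" {
			return nil // machine stopped within the budget
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errStopTimeout
}

func main() {
	// A VM that never leaves "Running" exhausts all 120 attempts, as
	// ha-190751-m03 did above. The interval is shortened here from the
	// ~1s cadence in the log so the demo finishes quickly.
	alwaysRunning := func() string { return "Running" }
	if err := waitForStop(alwaysRunning, 120, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err) // the log maps this to exit status 82 (GUEST_STOP_TIMEOUT)
	}
}
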
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-190751 --wait=true -v=7 --alsologtostderr
E0916 13:50:50.206532  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-190751 --wait=true -v=7 --alsologtostderr: (4m37.910111059s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-190751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-190751 -n ha-190751
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-190751 logs -n 25: (1.699591401s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m02:/home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m04 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp testdata/cp-test.txt                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751:/home/docker/cp-test_ha-190751-m04_ha-190751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751 sudo cat                                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m02:/home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03:/home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m03 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-190751 node stop m02 -v=7                                                     | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-190751 node start m02 -v=7                                                    | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-190751 -v=7                                                           | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-190751 -v=7                                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-190751 --wait=true -v=7                                                    | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:46 UTC | 16 Sep 24 13:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-190751                                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:51 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 13:46:39
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 13:46:39.625470  741236 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:46:39.625596  741236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:46:39.625607  741236 out.go:358] Setting ErrFile to fd 2...
	I0916 13:46:39.625613  741236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:46:39.625873  741236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:46:39.626429  741236 out.go:352] Setting JSON to false
	I0916 13:46:39.627418  741236 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12549,"bootTime":1726481851,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 13:46:39.627473  741236 start.go:139] virtualization: kvm guest
	I0916 13:46:39.629923  741236 out.go:177] * [ha-190751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 13:46:39.631246  741236 notify.go:220] Checking for updates...
	I0916 13:46:39.631259  741236 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 13:46:39.632860  741236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 13:46:39.634084  741236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:46:39.635303  741236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:46:39.636770  741236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 13:46:39.638068  741236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 13:46:39.639574  741236 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:46:39.639665  741236 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 13:46:39.640167  741236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:46:39.640206  741236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:46:39.655838  741236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42369
	I0916 13:46:39.656221  741236 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:46:39.656853  741236 main.go:141] libmachine: Using API Version  1
	I0916 13:46:39.656876  741236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:46:39.657261  741236 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:46:39.657437  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:46:39.692639  741236 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 13:46:39.693625  741236 start.go:297] selected driver: kvm2
	I0916 13:46:39.693637  741236 start.go:901] validating driver "kvm2" against &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:46:39.693800  741236 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 13:46:39.694123  741236 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:46:39.694199  741236 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 13:46:39.708560  741236 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 13:46:39.709256  741236 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:46:39.709299  741236 cni.go:84] Creating CNI manager for ""
	I0916 13:46:39.709354  741236 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 13:46:39.709426  741236 start.go:340] cluster config:
	{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:46:39.709629  741236 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:46:39.711081  741236 out.go:177] * Starting "ha-190751" primary control-plane node in "ha-190751" cluster
	I0916 13:46:39.712059  741236 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:46:39.712097  741236 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 13:46:39.712108  741236 cache.go:56] Caching tarball of preloaded images
	I0916 13:46:39.712192  741236 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:46:39.712206  741236 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:46:39.712337  741236 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:46:39.712539  741236 start.go:360] acquireMachinesLock for ha-190751: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:46:39.712638  741236 start.go:364] duration metric: took 79.689µs to acquireMachinesLock for "ha-190751"
	I0916 13:46:39.712657  741236 start.go:96] Skipping create...Using existing machine configuration
	I0916 13:46:39.712667  741236 fix.go:54] fixHost starting: 
	I0916 13:46:39.712934  741236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:46:39.712971  741236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:46:39.726630  741236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0916 13:46:39.727045  741236 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:46:39.727509  741236 main.go:141] libmachine: Using API Version  1
	I0916 13:46:39.727528  741236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:46:39.727885  741236 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:46:39.728112  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:46:39.728254  741236 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:46:39.729940  741236 fix.go:112] recreateIfNeeded on ha-190751: state=Running err=<nil>
	W0916 13:46:39.729962  741236 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 13:46:39.734762  741236 out.go:177] * Updating the running kvm2 "ha-190751" VM ...
	I0916 13:46:39.736168  741236 machine.go:93] provisionDockerMachine start ...
	I0916 13:46:39.736191  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:46:39.736429  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:39.739024  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.739520  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:39.739554  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.739694  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:39.739882  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.740020  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.740157  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:39.740352  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:39.740538  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:39.740549  741236 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 13:46:39.858852  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751
	
	I0916 13:46:39.858886  741236 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:46:39.859131  741236 buildroot.go:166] provisioning hostname "ha-190751"
	I0916 13:46:39.859161  741236 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:46:39.859334  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:39.862113  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.862529  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:39.862550  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.862658  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:39.862820  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.862944  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.863059  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:39.863169  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:39.863337  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:39.863348  741236 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751 && echo "ha-190751" | sudo tee /etc/hostname
	I0916 13:46:39.987108  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751
	
	I0916 13:46:39.987141  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:39.989879  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.990289  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:39.990314  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.990550  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:39.990738  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.990899  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.991024  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:39.991166  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:39.991344  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:39.991359  741236 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:46:40.103358  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:46:40.103394  741236 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:46:40.103422  741236 buildroot.go:174] setting up certificates
	I0916 13:46:40.103435  741236 provision.go:84] configureAuth start
	I0916 13:46:40.103453  741236 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:46:40.103720  741236 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:46:40.106488  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.106915  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.106942  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.107152  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:40.109253  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.109653  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.109700  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.109870  741236 provision.go:143] copyHostCerts
	I0916 13:46:40.109912  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:46:40.109956  741236 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:46:40.109968  741236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:46:40.110048  741236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:46:40.110156  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:46:40.110182  741236 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:46:40.110189  741236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:46:40.110231  741236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:46:40.110296  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:46:40.110319  741236 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:46:40.110325  741236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:46:40.110365  741236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:46:40.110445  741236 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751 san=[127.0.0.1 192.168.39.94 ha-190751 localhost minikube]
	I0916 13:46:40.284286  741236 provision.go:177] copyRemoteCerts
	I0916 13:46:40.284349  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:46:40.284381  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:40.286985  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.287309  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.287335  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.287493  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:40.287683  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:40.287832  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:40.287996  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:46:40.376067  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:46:40.376143  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 13:46:40.400945  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:46:40.401028  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 13:46:40.427679  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:46:40.427738  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:46:40.451973  741236 provision.go:87] duration metric: took 348.52093ms to configureAuth
	I0916 13:46:40.451997  741236 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:46:40.452230  741236 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:46:40.452331  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:40.455323  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.455765  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.455791  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.455917  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:40.456105  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:40.456305  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:40.456495  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:40.456659  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:40.456857  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:40.456874  741236 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:48:11.229084  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 13:48:11.229116  741236 machine.go:96] duration metric: took 1m31.492931394s to provisionDockerMachine
	I0916 13:48:11.229134  741236 start.go:293] postStartSetup for "ha-190751" (driver="kvm2")
	I0916 13:48:11.229147  741236 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:48:11.229224  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.229607  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:48:11.229646  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.232700  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.233147  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.233175  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.233322  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.233513  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.233682  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.233848  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:48:11.320416  741236 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:48:11.324552  741236 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:48:11.324575  741236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:48:11.324625  741236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:48:11.324710  741236 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:48:11.324722  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:48:11.324827  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:48:11.333691  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:48:11.356643  741236 start.go:296] duration metric: took 127.495158ms for postStartSetup
	I0916 13:48:11.356684  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.356935  741236 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 13:48:11.356962  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.359351  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.359712  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.359784  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.359844  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.360021  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.360156  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.360318  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	W0916 13:48:11.443008  741236 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 13:48:11.443035  741236 fix.go:56] duration metric: took 1m31.730369023s for fixHost
	I0916 13:48:11.443054  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.445780  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.446128  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.446162  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.446231  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.446447  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.446565  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.446727  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.446867  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:48:11.447048  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:48:11.447059  741236 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:48:11.554217  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726494491.520423446
	
	I0916 13:48:11.554245  741236 fix.go:216] guest clock: 1726494491.520423446
	I0916 13:48:11.554255  741236 fix.go:229] Guest: 2024-09-16 13:48:11.520423446 +0000 UTC Remote: 2024-09-16 13:48:11.443041663 +0000 UTC m=+91.854073528 (delta=77.381783ms)
	I0916 13:48:11.554281  741236 fix.go:200] guest clock delta is within tolerance: 77.381783ms
	I0916 13:48:11.554288  741236 start.go:83] releasing machines lock for "ha-190751", held for 1m31.841639874s
	I0916 13:48:11.554312  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.554534  741236 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:48:11.557156  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.557580  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.557603  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.557741  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.558240  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.558409  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.558496  741236 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:48:11.558546  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.558661  741236 ssh_runner.go:195] Run: cat /version.json
	I0916 13:48:11.558689  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.561199  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.561342  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.561601  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.561627  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.561732  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.561892  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.561919  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.561937  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.562073  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.562081  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.562237  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.562235  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:48:11.562371  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.562481  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:48:11.674738  741236 ssh_runner.go:195] Run: systemctl --version
	I0916 13:48:11.680449  741236 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:48:11.841765  741236 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:48:11.849852  741236 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:48:11.849912  741236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:48:11.859586  741236 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 13:48:11.859606  741236 start.go:495] detecting cgroup driver to use...
	I0916 13:48:11.859654  741236 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:48:11.877847  741236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:48:11.892017  741236 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:48:11.892090  741236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:48:11.906021  741236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:48:11.919462  741236 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:48:12.065828  741236 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:48:12.217509  741236 docker.go:233] disabling docker service ...
	I0916 13:48:12.217617  741236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:48:12.234145  741236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:48:12.248297  741236 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:48:12.388682  741236 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:48:12.528445  741236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:48:12.542085  741236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:48:12.559524  741236 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:48:12.559590  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.571961  741236 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:48:12.572018  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.583400  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.594211  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.605692  741236 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:48:12.615785  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.625651  741236 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.636001  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.645941  741236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:48:12.655062  741236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 13:48:12.663929  741236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:48:12.807271  741236 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 13:48:13.018807  741236 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:48:13.018881  741236 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:48:13.023794  741236 start.go:563] Will wait 60s for crictl version
	I0916 13:48:13.023841  741236 ssh_runner.go:195] Run: which crictl
	I0916 13:48:13.027625  741236 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:48:13.074513  741236 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:48:13.074611  741236 ssh_runner.go:195] Run: crio --version
	I0916 13:48:13.104737  741236 ssh_runner.go:195] Run: crio --version
	I0916 13:48:13.135324  741236 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:48:13.136654  741236 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:48:13.139202  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:13.139568  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:13.139597  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:13.139779  741236 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:48:13.144424  741236 kubeadm.go:883] updating cluster {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 13:48:13.144568  741236 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:48:13.144632  741236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:48:13.186085  741236 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 13:48:13.186106  741236 crio.go:433] Images already preloaded, skipping extraction
	I0916 13:48:13.186159  741236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:48:13.216653  741236 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 13:48:13.216676  741236 cache_images.go:84] Images are preloaded, skipping loading
	I0916 13:48:13.216689  741236 kubeadm.go:934] updating node { 192.168.39.94 8443 v1.31.1 crio true true} ...
	I0916 13:48:13.216801  741236 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 13:48:13.216863  741236 ssh_runner.go:195] Run: crio config
	I0916 13:48:13.260506  741236 cni.go:84] Creating CNI manager for ""
	I0916 13:48:13.260526  741236 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 13:48:13.260537  741236 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 13:48:13.260559  741236 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-190751 NodeName:ha-190751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 13:48:13.260698  741236 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-190751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
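	The YAML above is the complete kubeadm config minikube renders for this control-plane node: an InitConfiguration (node registration and advertise address 192.168.39.94:8443), a ClusterConfiguration (controlPlaneEndpoint control-plane.minikube.internal:8443, podSubnet 10.244.0.0/16), a KubeletConfiguration and a KubeProxyConfiguration, joined with --- separators. As an illustrative sketch only, and assuming the bundled kubeadm is recent enough to ship the "config validate" subcommand, the copy staged at /var/tmp/minikube/kubeadm.yaml.new further down in this log could be sanity-checked in place with:
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new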
	
	I0916 13:48:13.260719  741236 kube-vip.go:115] generating kube-vip config ...
	I0916 13:48:13.260759  741236 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:48:13.272030  741236 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:48:13.272144  741236 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
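	The manifest above is the kube-vip static pod that serves the control-plane VIP 192.168.39.254, with leader election and load-balancing enabled; a few lines below it is copied to /etc/kubernetes/manifests/kube-vip.yaml, from where the kubelet launches it without involving the API server. A minimal check on the node, assuming crictl is pointed at the CRI-O socket configured earlier in this log (illustrative only, not part of the test run):
	    sudo crictl ps --name kube-vip
	    sudo crictl logs "$(sudo crictl ps -q --name kube-vip | head -n 1)"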
	I0916 13:48:13.272196  741236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:48:13.281569  741236 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 13:48:13.281649  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 13:48:13.290638  741236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 13:48:13.306198  741236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:48:13.321505  741236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 13:48:13.337208  741236 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 13:48:13.353736  741236 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:48:13.357394  741236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:48:13.502995  741236 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:48:13.517369  741236 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.94
	I0916 13:48:13.517391  741236 certs.go:194] generating shared ca certs ...
	I0916 13:48:13.517412  741236 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:48:13.517602  741236 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:48:13.517660  741236 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:48:13.517745  741236 certs.go:256] generating profile certs ...
	I0916 13:48:13.517932  741236 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:48:13.517968  741236 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e
	I0916 13:48:13.517984  741236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.192 192.168.39.134 192.168.39.254]
	I0916 13:48:13.658856  741236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e ...
	I0916 13:48:13.658887  741236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e: {Name:mk5128865dd3ed5cf8f80f0e3504eee8210f3b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:48:13.659056  741236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e ...
	I0916 13:48:13.659066  741236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e: {Name:mk2b0a5cb0c64f285ce1d11db681fd7632720418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:48:13.659141  741236 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:48:13.659281  741236 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
	I0916 13:48:13.659413  741236 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:48:13.659428  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:48:13.659441  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:48:13.659452  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:48:13.659476  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:48:13.659499  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:48:13.659509  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:48:13.659523  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:48:13.659562  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:48:13.659619  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:48:13.659650  741236 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:48:13.659660  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:48:13.659681  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:48:13.659702  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:48:13.659723  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:48:13.659760  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:48:13.659811  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.659828  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:48:13.659840  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:13.660426  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:48:13.684690  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:48:13.706838  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:48:13.729327  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:48:13.751522  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 13:48:13.774140  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 13:48:13.797072  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:48:13.818969  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:48:13.861887  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:48:13.886393  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:48:13.908526  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:48:13.930618  741236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 13:48:13.945997  741236 ssh_runner.go:195] Run: openssl version
	I0916 13:48:13.951602  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:48:13.961514  741236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.965676  741236 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.965710  741236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.971089  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 13:48:13.979677  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:48:13.990388  741236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:48:13.994750  741236 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:48:13.994795  741236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:48:14.000058  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 13:48:14.008608  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:48:14.018567  741236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:14.023148  741236 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:14.023190  741236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:14.028363  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:48:14.036850  741236 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:48:14.041422  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 13:48:14.047282  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 13:48:14.052398  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 13:48:14.057515  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 13:48:14.062672  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 13:48:14.067782  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
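	Each of the six openssl invocations above uses -checkend 86400, which makes openssl x509 exit non-zero if the certificate will expire within the next 86400 seconds (24 hours); minikube runs these checks on the existing control-plane certificates before proceeding to StartCluster. The same check can be reproduced by hand against any file under /var/lib/minikube/certs (illustrative only):
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for at least 24h" || echo "expires within 24h"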
	I0916 13:48:14.073033  741236 kubeadm.go:392] StartCluster: {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
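The StartCluster dump above is the runner's cluster configuration for an HA profile: three control-plane nodes (the unnamed primary plus m02 and m03), one worker-only node (m04), and 192.168.39.254 as the APIServerHAVIP fronted by kube-vip. A minimal sketch of summarizing that node list, using hypothetical types modeled only on the fields visible in the dump:

    package main

    import "fmt"

    // Node mirrors the fields visible in the Nodes list of the dump above;
    // the type and field names here are assumptions for illustration only.
    type Node struct {
    	Name         string
    	IP           string
    	ControlPlane bool
    	Worker       bool
    }

    func main() {
    	nodes := []Node{
    		{Name: "", IP: "192.168.39.94", ControlPlane: true, Worker: true},
    		{Name: "m02", IP: "192.168.39.192", ControlPlane: true, Worker: true},
    		{Name: "m03", IP: "192.168.39.134", ControlPlane: true, Worker: true},
    		{Name: "m04", IP: "192.168.39.17", ControlPlane: false, Worker: true},
    	}
    	cp := 0
    	for _, n := range nodes {
    		if n.ControlPlane {
    			cp++
    		}
    	}
    	fmt.Printf("%d control-plane node(s) out of %d total\n", cp, len(nodes))
    }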
	I0916 13:48:14.073153  741236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 13:48:14.073206  741236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 13:48:14.111273  741236 cri.go:89] found id: "d8632a302625a774aeda4dc20b6685a2590ebfab7e534fcd2a864b4d7c73f4f1"
	I0916 13:48:14.111295  741236 cri.go:89] found id: "a9fd590fef01ea67abfba5099c5976e0f9a7071dc1d5440c355734d0d2c99e17"
	I0916 13:48:14.111301  741236 cri.go:89] found id: "653d7d20fc0c420b88d6cf3b91d680ee591f56c0e7d97b5ab4b0f7a32bd46d45"
	I0916 13:48:14.111321  741236 cri.go:89] found id: "e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781"
	I0916 13:48:14.111326  741236 cri.go:89] found id: "5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7"
	I0916 13:48:14.111330  741236 cri.go:89] found id: "85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0"
	I0916 13:48:14.111334  741236 cri.go:89] found id: "d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee"
	I0916 13:48:14.111337  741236 cri.go:89] found id: "876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629"
	I0916 13:48:14.111340  741236 cri.go:89] found id: "ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061"
	I0916 13:48:14.111345  741236 cri.go:89] found id: "0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90"
	I0916 13:48:14.111351  741236 cri.go:89] found id: "13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad"
	I0916 13:48:14.111354  741236 cri.go:89] found id: "2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b"
	I0916 13:48:14.111360  741236 cri.go:89] found id: "3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c"
	I0916 13:48:14.111363  741236 cri.go:89] found id: ""
	I0916 13:48:14.111412  741236 ssh_runner.go:195] Run: sudo runc list -f json
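The cri.go lines above show how the runner enumerates kube-system containers before restarting the cluster: it shells out to `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and records each non-empty output line as a container ID (the trailing empty "found id" entry is just the final newline), then cross-checks with `runc list -f json`. A minimal local sketch of the same listing, assuming crictl is installed and the CRI socket is reachable (the real runner wraps the command in `sudo -s eval` over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers returns the IDs of all containers (running or
    // exited) whose pod lives in the kube-system namespace, using the same
    // crictl invocation that appears in the log above.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	fmt.Println(len(ids), "kube-system containers found", err)
    }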
	
	
	==> CRI-O <==
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.199472159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494678199446519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80b4355c-ce78-4093-9168-751401fb9b70 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.201415421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee39b7cc-a5e2-4ce4-a395-02e6e7a72ccb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.201474736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee39b7cc-a5e2-4ce4-a395-02e6e7a72ccb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.202041400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee39b7cc-a5e2-4ce4-a395-02e6e7a72ccb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.249354754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e9b196d-319b-4413-a615-9234cb58643a name=/runtime.v1.RuntimeService/Version
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.249459326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e9b196d-319b-4413-a615-9234cb58643a name=/runtime.v1.RuntimeService/Version
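Each CRI-O entry above is the debug trace of one CRI gRPC call (Version, ImageFsInfo, ListContainers) as recorded by the otel-collector interceptors; the near-identical ListContainers responses simply reflect a client polling the full container list every few hundred milliseconds. A minimal sketch of issuing the same /runtime.v1.RuntimeService/Version call directly, assuming the k8s.io/cri-api client package and CRI-O's default socket path (both are assumptions, not taken from this log):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Default CRI-O socket path; adjust if the runtime endpoint differs.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Mirrors the VersionResponse fields seen in the log
    	// (RuntimeName:cri-o, RuntimeVersion:1.29.1, RuntimeApiVersion:v1).
    	fmt.Println(resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }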
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.250537542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b1314e3-0075-44af-9cf5-0d0e2b6cd0b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.251128560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494678251106625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b1314e3-0075-44af-9cf5-0d0e2b6cd0b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.251919714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60f3b203-9017-448d-992c-b3a2883aa1c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.251999786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60f3b203-9017-448d-992c-b3a2883aa1c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.252425261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60f3b203-9017-448d-992c-b3a2883aa1c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.308984903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac7424b8-5707-4e51-925c-0aceaae291ef name=/runtime.v1.RuntimeService/Version
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.309121029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac7424b8-5707-4e51-925c-0aceaae291ef name=/runtime.v1.RuntimeService/Version
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.310374884Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=562f3e2d-9f0d-4c0e-86a9-5725c27a0078 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.310974158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494678310820767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=562f3e2d-9f0d-4c0e-86a9-5725c27a0078 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.312445493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc0ab75a-13a6-415e-a300-d6eb308531be name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.312524986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc0ab75a-13a6-415e-a300-d6eb308531be name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.313073996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc0ab75a-13a6-415e-a300-d6eb308531be name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.365368443Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=862045a6-09d8-49e2-8d34-fcbb2642ea42 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.365489612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=862045a6-09d8-49e2-8d34-fcbb2642ea42 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.367129082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eec85fae-2507-482b-9cbb-7206d27b4f05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.367687977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494678367659122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eec85fae-2507-482b-9cbb-7206d27b4f05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.368740485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aa20257-2d9b-4fba-9675-e05ae4948194 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.368816630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aa20257-2d9b-4fba-9675-e05ae4948194 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:51:18 ha-190751 crio[3513]: time="2024-09-16 13:51:18.369517000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5aa20257-2d9b-4fba-9675-e05ae4948194 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	68a17947275a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   de7aec1e5e47f       storage-provisioner
	5e02e885f6ff0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   3                   8d104a92ea828       kube-controller-manager-ha-190751
	8d9edc7df5a23       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            3                   99daa5073d1f2       kube-apiserver-ha-190751
	98b64476badf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   de7aec1e5e47f       storage-provisioner
	40f8a3dd58304       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   467454c143372       busybox-7dff88458-lsqcp
	19c8c831977cd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   2                   8d104a92ea828       kube-controller-manager-ha-190751
	46fbd2ab36bd4       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   21a6ffa293c76       kube-vip-ha-190751
	a2d90a9f34541       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   1                   2bf212396f86e       coredns-7c65d6cfc9-9lw8n
	aff424a1e0a36       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago        Running             kindnet-cni               1                   2f8de5f3a3283       kindnet-gpb96
	db48b82a19ccd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   1                   474617d62056c       coredns-7c65d6cfc9-gzkpj
	f88a0c0f7b294       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago        Running             kube-scheduler            1                   b0afa0ba4326d       kube-scheduler-ha-190751
	56e43e1330c7a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      1                   f99d150e833d7       etcd-ha-190751
	d9d5a75c9054b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago        Running             kube-proxy                1                   379b8517e1b92       kube-proxy-9d7kt
	d509e2938a032       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago        Exited              kube-apiserver            2                   99daa5073d1f2       kube-apiserver-ha-190751
	1ff16b4cf488d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   70804a075dc34       busybox-7dff88458-lsqcp
	e33b03d2f6fce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   d74b47a92fc73       coredns-7c65d6cfc9-gzkpj
	5597ff6fa9128       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   faf5324ae84ec       coredns-7c65d6cfc9-9lw8n
	d2fb4efd07b92       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   e227eb76eed28       kube-proxy-9d7kt
	876c9f45c3848       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   06c5005bbb715       kindnet-gpb96
	0cd93f6d25b96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   235857e1be3ea       etcd-ha-190751
	3d2fdc916e364       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   2b68d5be2f2cf       kube-scheduler-ha-190751
	
	
	==> coredns [5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7] <==
	[INFO] 10.244.2.2:39675 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165265s
	[INFO] 10.244.2.2:37048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001066948s
	[INFO] 10.244.2.2:56795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069535s
	[INFO] 10.244.1.2:57890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135841s
	[INFO] 10.244.1.2:47650 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001636029s
	[INFO] 10.244.1.2:50206 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099676s
	[INFO] 10.244.1.2:55092 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109421s
	[INFO] 10.244.0.4:53870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097861s
	[INFO] 10.244.0.4:42443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049844s
	[INFO] 10.244.0.4:52687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057203s
	[INFO] 10.244.2.2:34837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122205s
	[INFO] 10.244.2.2:39661 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123335s
	[INFO] 10.244.2.2:52074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080782s
	[INFO] 10.244.1.2:41492 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098139s
	[INFO] 10.244.1.2:49674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088502s
	[INFO] 10.244.0.4:53518 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259854s
	[INFO] 10.244.0.4:41118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155352s
	[INFO] 10.244.0.4:33823 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119363s
	[INFO] 10.244.2.2:44582 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180459s
	[INFO] 10.244.2.2:52118 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196503s
	[INFO] 10.244.1.2:43708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011298s
	[INFO] 10.244.1.2:42623 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011952s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1775&timeout=9m50s&timeoutSeconds=590&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46722->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1823141026]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 13:48:29.548) (total time: 13378ms):
	Trace[1823141026]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46722->10.96.0.1:443: read: connection reset by peer 13377ms (13:48:42.926)
	Trace[1823141026]: [13.378031764s] [13.378031764s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46722->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45484->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45484->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad] <==
	[INFO] plugin/kubernetes: Trace[2045556924]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 13:48:29.278) (total time: 10001ms):
	Trace[2045556924]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:48:39.279)
	Trace[2045556924]: [10.001741529s] [10.001741529s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781] <==
	[INFO] 10.244.1.2:37179 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001780819s
	[INFO] 10.244.0.4:50469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268768s
	[INFO] 10.244.0.4:48039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163904s
	[INFO] 10.244.0.4:34482 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084666s
	[INFO] 10.244.0.4:39892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003221704s
	[INFO] 10.244.0.4:58788 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139358s
	[INFO] 10.244.2.2:57520 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099764s
	[INFO] 10.244.2.2:33023 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142913s
	[INFO] 10.244.2.2:46886 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071348s
	[INFO] 10.244.1.2:48181 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120675s
	[INFO] 10.244.1.2:46254 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007984s
	[INFO] 10.244.1.2:51236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001105782s
	[INFO] 10.244.1.2:43880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069986s
	[INFO] 10.244.0.4:51480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109815s
	[INFO] 10.244.2.2:33439 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156091s
	[INFO] 10.244.1.2:40338 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202214s
	[INFO] 10.244.1.2:41511 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135597s
	[INFO] 10.244.0.4:57318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142285s
	[INFO] 10.244.2.2:51122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159294s
	[INFO] 10.244.2.2:45477 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016112s
	[INFO] 10.244.1.2:53140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015857s
	[INFO] 10.244.1.2:56526 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182857s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1775&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-190751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T13_37_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:51:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:38:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    ha-190751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 413212b342c542b3a63285d76f88cc9f
	  System UUID:                413212b3-42c5-42b3-a632-85d76f88cc9f
	  Boot ID:                    757a1925-23d7-4d65-93ec-732a8b69642f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lsqcp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-9lw8n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-gzkpj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-190751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-gpb96                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-190751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-190751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-9d7kt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-190751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-190751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 2m17s                 kube-proxy       
	  Normal   Starting                 13m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                   kubelet          Node ha-190751 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                   kubelet          Node ha-190751 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                   kubelet          Node ha-190751 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                   node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   NodeReady                12m                   kubelet          Node ha-190751 status is now: NodeReady
	  Normal   RegisteredNode           12m                   node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Warning  ContainerGCFailed        3m39s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m5s (x3 over 3m54s)  kubelet          Node ha-190751 status is now: NodeNotReady
	  Normal   RegisteredNode           2m18s                 node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   RegisteredNode           102s                  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   RegisteredNode           38s                   node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	
	
	Name:               ha-190751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_38_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:38:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:51:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.192
	  Hostname:    ha-190751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 550acf86555f4901ac21dc9dc8bbc28f
	  System UUID:                550acf86-555f-4901-ac21-dc9dc8bbc28f
	  Boot ID:                    6b926d7d-06da-4813-88d7-fe05ddd773b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wnt5k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-190751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-qfl9j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-190751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-190751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-24q9n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-190751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-190751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-190751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-190751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-190751-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  NodeNotReady             9m13s                  node-controller  Node ha-190751-m02 status is now: NodeNotReady
	  Normal  Starting                 2m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node ha-190751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x7 over 2m41s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           102s                   node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           38s                    node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	
	
	Name:               ha-190751-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_39_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:39:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:51:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:50:52 +0000   Mon, 16 Sep 2024 13:50:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:50:52 +0000   Mon, 16 Sep 2024 13:50:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:50:52 +0000   Mon, 16 Sep 2024 13:50:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:50:52 +0000   Mon, 16 Sep 2024 13:50:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-190751-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a371c754a93a41bd8e51ba43403aed52
	  System UUID:                a371c754-a93a-41bd-8e51-ba43403aed52
	  Boot ID:                    425b75ea-d37f-42e7-96cf-3bff8401867f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6sc6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-190751-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-s7765                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-190751-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-190751-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9lpwl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-190751-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-190751-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-190751-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal   RegisteredNode           102s               node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	  Normal   NodeNotReady             98s                node-controller  Node ha-190751-m03 status is now: NodeNotReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 57s (x2 over 57s)  kubelet          Node ha-190751-m03 has been rebooted, boot id: 425b75ea-d37f-42e7-96cf-3bff8401867f
	  Normal   NodeHasSufficientMemory  57s (x3 over 57s)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x3 over 57s)  kubelet          Node ha-190751-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x3 over 57s)  kubelet          Node ha-190751-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             57s                kubelet          Node ha-190751-m03 status is now: NodeNotReady
	  Normal   NodeReady                57s                kubelet          Node ha-190751-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-190751-m03 event: Registered Node ha-190751-m03 in Controller
	
	
	Name:               ha-190751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_40_46_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:51:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:51:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:51:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:51:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:51:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-190751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 99332c0e26304b3097b2fce26060f009
	  System UUID:                99332c0e-2630-4b30-97b2-fce26060f009
	  Boot ID:                    787b425c-db32-4bf7-817c-db14aaf6d08d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9nmfv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-tk6f6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-190751-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-190751-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   RegisteredNode           102s               node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   NodeNotReady             98s                node-controller  Node ha-190751-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 7s (x2 over 7s)    kubelet          Node ha-190751-m04 has been rebooted, boot id: 787b425c-db32-4bf7-817c-db14aaf6d08d
	  Normal   NodeHasSufficientMemory  7s (x3 over 7s)    kubelet          Node ha-190751-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s (x3 over 7s)    kubelet          Node ha-190751-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s (x3 over 7s)    kubelet          Node ha-190751-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             7s                 kubelet          Node ha-190751-m04 status is now: NodeNotReady
	  Normal   NodeReady                7s                 kubelet          Node ha-190751-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.291459] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.062528] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065864] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.157574] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.135658] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.243263] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.876209] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.159219] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.061484] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.191933] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.087738] kauditd_printk_skb: 79 callbacks suppressed
	[Sep16 13:38] kauditd_printk_skb: 69 callbacks suppressed
	[ +12.548550] kauditd_printk_skb: 26 callbacks suppressed
	[Sep16 13:48] systemd-fstab-generator[3437]: Ignoring "noauto" option for root device
	[  +0.154511] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.174882] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.138837] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.280087] systemd-fstab-generator[3503]: Ignoring "noauto" option for root device
	[  +0.689243] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +3.674650] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.074803] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.597115] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.272570] kauditd_printk_skb: 10 callbacks suppressed
	[Sep16 13:49] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90] <==
	{"level":"info","ts":"2024-09-16T13:46:40.589998Z","caller":"traceutil/trace.go:171","msg":"trace[1690657794] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"831.1199ms","start":"2024-09-16T13:46:39.758872Z","end":"2024-09-16T13:46:40.589992Z","steps":["trace[1690657794] 'agreement among raft nodes before linearized reading'  (duration: 824.44903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T13:46:40.590019Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T13:46:39.758808Z","time spent":"831.202908ms","remote":"127.0.0.1:50254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 "}
	2024/09/16 13:46:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T13:46:40.621370Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T13:46:40.621457Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T13:46:40.622533Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c23cd90330b5fc4f","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-16T13:46:40.624102Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624201Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624224Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624643Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.625118Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.625157Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.625233Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625311Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625484Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625602Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625705Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625878Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625931Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.632191Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"warn","ts":"2024-09-16T13:46:40.632287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.896036216s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-16T13:46:40.632542Z","caller":"traceutil/trace.go:171","msg":"trace[226656043] range","detail":"{range_begin:; range_end:; }","duration":"8.896304507s","start":"2024-09-16T13:46:31.736230Z","end":"2024-09-16T13:46:40.632535Z","steps":["trace[226656043] 'agreement among raft nodes before linearized reading'  (duration: 8.896035567s)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T13:46:40.632463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-09-16T13:46:40.632634Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-190751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	
	
	==> etcd [56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d] <==
	{"level":"warn","ts":"2024-09-16T13:50:22.256392Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.134:2380/version","remote-member-id":"57f8f59559f02f50","error":"Get \"https://192.168.39.134:2380/version\": dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:22.256464Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"57f8f59559f02f50","error":"Get \"https://192.168.39.134:2380/version\": dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:23.360175Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57f8f59559f02f50","rtt":"0s","error":"dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:23.360334Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"57f8f59559f02f50","rtt":"0s","error":"dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:26.258735Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.134:2380/version","remote-member-id":"57f8f59559f02f50","error":"Get \"https://192.168.39.134:2380/version\": dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:26.259071Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"57f8f59559f02f50","error":"Get \"https://192.168.39.134:2380/version\": dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:28.360884Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57f8f59559f02f50","rtt":"0s","error":"dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:28.360965Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"57f8f59559f02f50","rtt":"0s","error":"dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:30.261939Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.134:2380/version","remote-member-id":"57f8f59559f02f50","error":"Get \"https://192.168.39.134:2380/version\": dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:30.262146Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"57f8f59559f02f50","error":"Get \"https://192.168.39.134:2380/version\": dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:32.564929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.219209ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18180910728332811034 > lease_revoke:<id:7c4f91fb175d9a84>","response":"size:29"}
	{"level":"info","ts":"2024-09-16T13:50:33.147432Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:50:33.147542Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:50:33.149149Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:50:33.162411Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c23cd90330b5fc4f","to":"57f8f59559f02f50","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T13:50:33.162520Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:50:33.164781Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c23cd90330b5fc4f","to":"57f8f59559f02f50","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T13:50:33.164894Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"warn","ts":"2024-09-16T13:50:33.361888Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"57f8f59559f02f50","rtt":"0s","error":"dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:50:33.361935Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57f8f59559f02f50","rtt":"0s","error":"dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:51:14.632021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.505998ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18180910728332811424 > lease_revoke:<id:2f5091fb196c9d8d>","response":"size:29"}
	{"level":"warn","ts":"2024-09-16T13:51:14.632744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.1098ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T13:51:14.632817Z","caller":"traceutil/trace.go:171","msg":"trace[218423233] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2542; }","duration":"118.173876ms","start":"2024-09-16T13:51:14.514616Z","end":"2024-09-16T13:51:14.632790Z","steps":["trace[218423233] 'range keys from in-memory index tree'  (duration: 118.1038ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T13:51:16.726921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.034074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-ha-190751-m02\" ","response":"range_response_count:1 size:4326"}
	{"level":"info","ts":"2024-09-16T13:51:16.727023Z","caller":"traceutil/trace.go:171","msg":"trace[1118241631] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-ha-190751-m02; range_end:; response_count:1; response_revision:2553; }","duration":"139.235457ms","start":"2024-09-16T13:51:16.587768Z","end":"2024-09-16T13:51:16.727003Z","steps":["trace[1118241631] 'range keys from in-memory index tree'  (duration: 138.102768ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:51:18 up 14 min,  0 users,  load average: 0.10, 0.38, 0.30
	Linux ha-190751 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629] <==
	I0916 13:46:03.329387       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:13.330372       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:46:13.330536       1 main.go:299] handling current node
	I0916 13:46:13.330576       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:46:13.330597       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:13.330748       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:46:13.330770       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:46:13.330910       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:46:13.330937       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:46:23.330925       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:46:23.331009       1 main.go:299] handling current node
	I0916 13:46:23.331037       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:46:23.331055       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:23.331212       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:46:23.331233       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:46:23.331286       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:46:23.331304       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:46:33.331747       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:46:33.331911       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:33.332076       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:46:33.332145       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:46:33.332275       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:46:33.332300       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:46:33.332372       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:46:33.332391       1 main.go:299] handling current node
	
	
	==> kindnet [aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9] <==
	I0916 13:50:39.173229       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:50:49.170045       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:50:49.170159       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:50:49.170309       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:50:49.170417       1 main.go:299] handling current node
	I0916 13:50:49.170449       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:50:49.170527       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:50:49.170610       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:50:49.170628       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:50:59.172165       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:50:59.172259       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:50:59.172417       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:50:59.172714       1 main.go:299] handling current node
	I0916 13:50:59.172761       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:50:59.172931       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:50:59.173094       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:50:59.173258       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:51:09.170224       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:51:09.170276       1 main.go:299] handling current node
	I0916 13:51:09.170294       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:51:09.170303       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:51:09.170516       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:51:09.170567       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:51:09.170677       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:51:09.170717       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb] <==
	I0916 13:49:05.790481       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 13:49:05.790662       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 13:49:05.880151       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 13:49:05.880184       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 13:49:05.881399       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 13:49:05.881681       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 13:49:05.884128       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 13:49:05.884200       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 13:49:05.884316       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 13:49:05.897919       1 aggregator.go:171] initial CRD sync complete...
	I0916 13:49:05.897965       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 13:49:05.897971       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 13:49:05.897976       1 cache.go:39] Caches are synced for autoregister controller
	I0916 13:49:05.906509       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 13:49:05.907397       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 13:49:05.913327       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 13:49:05.920996       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 13:49:05.921032       1 policy_source.go:224] refreshing policies
	W0916 13:49:05.924737       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.134 192.168.39.192]
	I0916 13:49:05.926537       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 13:49:05.939384       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 13:49:05.942540       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 13:49:06.006891       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 13:49:06.789288       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 13:49:07.056917       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.192 192.168.39.94]
	
	
	==> kube-apiserver [d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad] <==
	I0916 13:48:18.240271       1 options.go:228] external host was not specified, using 192.168.39.94
	I0916 13:48:18.244399       1 server.go:142] Version: v1.31.1
	I0916 13:48:18.244462       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:48:19.289487       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0916 13:48:19.300011       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 13:48:19.303912       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0916 13:48:19.305870       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0916 13:48:19.306259       1 instance.go:232] Using reconciler: lease
	W0916 13:48:39.288638       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0916 13:48:39.288639       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0916 13:48:39.306944       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81] <==
	I0916 13:48:50.770107       1 serving.go:386] Generated self-signed cert in-memory
	I0916 13:48:51.051966       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 13:48:51.052003       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:48:51.053274       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 13:48:51.053502       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 13:48:51.053509       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 13:48:51.053529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 13:49:01.055901       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.94:8443/healthz\": dial tcp 192.168.39.94:8443: connect: connection refused"
	
	
	==> kube-controller-manager [5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a] <==
	I0916 13:49:40.871016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	I0916 13:49:41.005050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="55.830997ms"
	I0916 13:49:41.005173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.614µs"
	I0916 13:49:41.062884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	I0916 13:49:45.651071       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m8pnj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m8pnj\": the object has been modified; please apply your changes to the latest version and try again"
	I0916 13:49:45.652061       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3d0e0f18-9130-402d-8357-2082256958d5", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m8pnj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m8pnj": the object has been modified; please apply your changes to the latest version and try again
	I0916 13:49:45.676056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.892483ms"
	I0916 13:49:45.676174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="69.335µs"
	I0916 13:49:46.035249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m02"
	I0916 13:49:46.104376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	I0916 13:49:51.148651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:49:56.182090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:50:21.950511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	I0916 13:50:21.972271       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	I0916 13:50:22.883087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.68µs"
	I0916 13:50:26.027246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	I0916 13:50:38.350381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.317005ms"
	I0916 13:50:38.350567       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.466µs"
	I0916 13:50:40.935488       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:50:40.996079       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:50:52.540167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	I0916 13:51:11.332234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:51:11.332311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-190751-m04"
	I0916 13:51:11.349416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:51:15.956477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	
	
	==> kube-proxy [d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee] <==
	E0916 13:45:35.743342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:38.799129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:38.799328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:38.799503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:38.799575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:38.799697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:38.799777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:44.944023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:44.944104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:44.944198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:44.944232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:44.944378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:44.944415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:57.230707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:57.231068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:57.231306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:57.231398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:00.302389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:00.302579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:18.735179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:18.735288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:21.807716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:21.808160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:24.878689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:24.878920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 13:48:21.615188       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:24.688312       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:27.758262       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:33.902899       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:43.118561       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:49:01.550524       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 13:49:01.550585       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0916 13:49:01.550651       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 13:49:01.585204       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 13:49:01.585283       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 13:49:01.585308       1 server_linux.go:169] "Using iptables Proxier"
	I0916 13:49:01.587678       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 13:49:01.588079       1 server.go:483] "Version info" version="v1.31.1"
	I0916 13:49:01.588114       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:49:01.590228       1 config.go:199] "Starting service config controller"
	I0916 13:49:01.590270       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 13:49:01.590292       1 config.go:105] "Starting endpoint slice config controller"
	I0916 13:49:01.590295       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 13:49:01.590878       1 config.go:328] "Starting node config controller"
	I0916 13:49:01.590904       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 13:49:03.590698       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 13:49:03.590763       1 shared_informer.go:320] Caches are synced for service config
	I0916 13:49:03.590976       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c] <==
	E0916 13:37:35.232941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 13:37:36.647896       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 13:40:46.111447       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.111635       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1bfac972-00f2-440b-8577-132ebf2ef8fa(kube-system/kube-proxy-v4ngc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v4ngc"
	E0916 13:40:46.111674       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" pod="kube-system/kube-proxy-v4ngc"
	I0916 13:40:46.111701       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.136509       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	E0916 13:40:46.136581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a53af4e2-ffdc-4e32-8f97-f0b2684145be(kube-system/kindnet-9nmfv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9nmfv"
	E0916 13:40:46.136599       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" pod="kube-system/kindnet-9nmfv"
	I0916 13:40:46.136617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	E0916 13:46:32.299016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0916 13:46:32.673881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 13:46:33.207918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0916 13:46:34.163140       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0916 13:46:34.525037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 13:46:35.918803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0916 13:46:35.998358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 13:46:36.664235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0916 13:46:37.506907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 13:46:38.862817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0916 13:46:39.672670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0916 13:46:39.958770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	I0916 13:46:40.565675       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 13:46:40.565744       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 13:46:40.567398       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f] <==
	W0916 13:48:57.716581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.94:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:57.716686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.94:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:58.088358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.94:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:58.088427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.94:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:58.600434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.94:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:58.600521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.94:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:58.987804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:58.987943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:59.738158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:59.738266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.042343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.94:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.042416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.94:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.269741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.94:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.269942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.94:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.515438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.94:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.515502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.94:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.627437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.627513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:01.322481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.94:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:01.322544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.94:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:01.516424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:01.516482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:02.546791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.94:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:02.546911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.94:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	I0916 13:49:18.222793       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 13:49:59 ha-190751 kubelet[1315]: E0916 13:49:59.837769    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494599837023236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:49:59 ha-190751 kubelet[1315]: E0916 13:49:59.837790    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494599837023236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:06 ha-190751 kubelet[1315]: I0916 13:50:06.621075    1315 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-190751" podUID="d979d6e0-d0db-4fe1-a8e7-d8e361f20a88"
	Sep 16 13:50:06 ha-190751 kubelet[1315]: I0916 13:50:06.643716    1315 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-190751"
	Sep 16 13:50:07 ha-190751 kubelet[1315]: I0916 13:50:07.359756    1315 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-190751" podUID="d979d6e0-d0db-4fe1-a8e7-d8e361f20a88"
	Sep 16 13:50:09 ha-190751 kubelet[1315]: I0916 13:50:09.635814    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-190751" podStartSLOduration=3.635782493 podStartE2EDuration="3.635782493s" podCreationTimestamp="2024-09-16 13:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 13:50:09.635538404 +0000 UTC m=+750.162102650" watchObservedRunningTime="2024-09-16 13:50:09.635782493 +0000 UTC m=+750.162346757"
	Sep 16 13:50:09 ha-190751 kubelet[1315]: E0916 13:50:09.840645    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494609839486380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:09 ha-190751 kubelet[1315]: E0916 13:50:09.840808    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494609839486380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:19 ha-190751 kubelet[1315]: E0916 13:50:19.843366    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494619842636614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:19 ha-190751 kubelet[1315]: E0916 13:50:19.843628    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494619842636614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:29 ha-190751 kubelet[1315]: E0916 13:50:29.845362    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494629844992379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:29 ha-190751 kubelet[1315]: E0916 13:50:29.845914    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494629844992379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:39 ha-190751 kubelet[1315]: E0916 13:50:39.651346    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 13:50:39 ha-190751 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 13:50:39 ha-190751 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 13:50:39 ha-190751 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 13:50:39 ha-190751 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 13:50:39 ha-190751 kubelet[1315]: E0916 13:50:39.849565    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494639847784518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:39 ha-190751 kubelet[1315]: E0916 13:50:39.849635    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494639847784518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:49 ha-190751 kubelet[1315]: E0916 13:50:49.854205    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494649853695373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:49 ha-190751 kubelet[1315]: E0916 13:50:49.854257    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494649853695373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:59 ha-190751 kubelet[1315]: E0916 13:50:59.856037    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494659855479987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:50:59 ha-190751 kubelet[1315]: E0916 13:50:59.856071    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494659855479987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:51:09 ha-190751 kubelet[1315]: E0916 13:51:09.858787    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494669858480255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:51:09 ha-190751 kubelet[1315]: E0916 13:51:09.858891    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494669858480255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0916 13:51:17.906274  742642 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19652-713072/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-190751 -n ha-190751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-190751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (402.22s)

x
+
TestMultiControlPlane/serial/StopCluster (141.65s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 stop -v=7 --alsologtostderr: exit status 82 (2m0.465260257s)

-- stdout --
	* Stopping node "ha-190751-m04"  ...
	
	

-- /stdout --
** stderr ** 
	I0916 13:51:36.989116  743052 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:51:36.989223  743052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:51:36.989231  743052 out.go:358] Setting ErrFile to fd 2...
	I0916 13:51:36.989235  743052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:51:36.989422  743052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:51:36.989689  743052 out.go:352] Setting JSON to false
	I0916 13:51:36.989782  743052 mustload.go:65] Loading cluster: ha-190751
	I0916 13:51:36.990171  743052 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:51:36.990253  743052 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:51:36.990425  743052 mustload.go:65] Loading cluster: ha-190751
	I0916 13:51:36.990551  743052 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:51:36.990573  743052 stop.go:39] StopHost: ha-190751-m04
	I0916 13:51:36.990958  743052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:51:36.990996  743052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:51:37.006052  743052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0916 13:51:37.006584  743052 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:51:37.007142  743052 main.go:141] libmachine: Using API Version  1
	I0916 13:51:37.007164  743052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:51:37.007585  743052 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:51:37.009900  743052 out.go:177] * Stopping node "ha-190751-m04"  ...
	I0916 13:51:37.010893  743052 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 13:51:37.010924  743052 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:51:37.011190  743052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 13:51:37.011230  743052 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:51:37.014665  743052 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:51:37.015150  743052 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:51:05 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:51:37.015185  743052 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:51:37.015305  743052 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:51:37.015487  743052 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:51:37.015625  743052 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:51:37.015750  743052 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	I0916 13:51:37.102292  743052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 13:51:37.157003  743052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 13:51:37.211054  743052 main.go:141] libmachine: Stopping "ha-190751-m04"...
	I0916 13:51:37.211097  743052 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:51:37.212638  743052 main.go:141] libmachine: (ha-190751-m04) Calling .Stop
	I0916 13:51:37.216964  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 0/120
	I0916 13:51:38.218242  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 1/120
	I0916 13:51:39.219602  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 2/120
	I0916 13:51:40.221090  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 3/120
	I0916 13:51:41.222424  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 4/120
	I0916 13:51:42.224479  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 5/120
	I0916 13:51:43.225959  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 6/120
	I0916 13:51:44.228187  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 7/120
	I0916 13:51:45.229469  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 8/120
	I0916 13:51:46.230888  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 9/120
	I0916 13:51:47.233123  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 10/120
	I0916 13:51:48.234794  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 11/120
	I0916 13:51:49.236294  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 12/120
	I0916 13:51:50.237622  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 13/120
	I0916 13:51:51.238874  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 14/120
	I0916 13:51:52.240821  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 15/120
	I0916 13:51:53.242310  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 16/120
	I0916 13:51:54.244057  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 17/120
	I0916 13:51:55.245475  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 18/120
	I0916 13:51:56.246936  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 19/120
	I0916 13:51:57.249157  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 20/120
	I0916 13:51:58.250535  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 21/120
	I0916 13:51:59.251886  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 22/120
	I0916 13:52:00.253359  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 23/120
	I0916 13:52:01.254803  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 24/120
	I0916 13:52:02.256799  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 25/120
	I0916 13:52:03.258073  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 26/120
	I0916 13:52:04.260288  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 27/120
	I0916 13:52:05.261656  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 28/120
	I0916 13:52:06.263058  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 29/120
	I0916 13:52:07.264587  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 30/120
	I0916 13:52:08.265920  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 31/120
	I0916 13:52:09.268229  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 32/120
	I0916 13:52:10.270439  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 33/120
	I0916 13:52:11.271707  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 34/120
	I0916 13:52:12.273488  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 35/120
	I0916 13:52:13.275265  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 36/120
	I0916 13:52:14.276394  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 37/120
	I0916 13:52:15.277565  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 38/120
	I0916 13:52:16.278891  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 39/120
	I0916 13:52:17.281150  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 40/120
	I0916 13:52:18.282318  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 41/120
	I0916 13:52:19.284430  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 42/120
	I0916 13:52:20.285623  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 43/120
	I0916 13:52:21.286812  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 44/120
	I0916 13:52:22.288616  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 45/120
	I0916 13:52:23.289923  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 46/120
	I0916 13:52:24.291064  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 47/120
	I0916 13:52:25.292541  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 48/120
	I0916 13:52:26.294390  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 49/120
	I0916 13:52:27.296322  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 50/120
	I0916 13:52:28.297686  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 51/120
	I0916 13:52:29.298858  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 52/120
	I0916 13:52:30.300093  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 53/120
	I0916 13:52:31.301225  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 54/120
	I0916 13:52:32.303450  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 55/120
	I0916 13:52:33.304952  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 56/120
	I0916 13:52:34.306385  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 57/120
	I0916 13:52:35.308139  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 58/120
	I0916 13:52:36.309352  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 59/120
	I0916 13:52:37.311301  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 60/120
	I0916 13:52:38.312744  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 61/120
	I0916 13:52:39.314693  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 62/120
	I0916 13:52:40.316108  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 63/120
	I0916 13:52:41.317282  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 64/120
	I0916 13:52:42.318972  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 65/120
	I0916 13:52:43.320253  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 66/120
	I0916 13:52:44.321901  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 67/120
	I0916 13:52:45.323035  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 68/120
	I0916 13:52:46.324192  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 69/120
	I0916 13:52:47.326056  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 70/120
	I0916 13:52:48.327315  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 71/120
	I0916 13:52:49.328723  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 72/120
	I0916 13:52:50.329928  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 73/120
	I0916 13:52:51.331199  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 74/120
	I0916 13:52:52.333028  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 75/120
	I0916 13:52:53.334425  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 76/120
	I0916 13:52:54.335822  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 77/120
	I0916 13:52:55.337613  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 78/120
	I0916 13:52:56.338956  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 79/120
	I0916 13:52:57.340945  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 80/120
	I0916 13:52:58.342238  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 81/120
	I0916 13:52:59.343846  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 82/120
	I0916 13:53:00.345155  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 83/120
	I0916 13:53:01.346706  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 84/120
	I0916 13:53:02.348630  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 85/120
	I0916 13:53:03.350469  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 86/120
	I0916 13:53:04.352137  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 87/120
	I0916 13:53:05.353402  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 88/120
	I0916 13:53:06.354943  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 89/120
	I0916 13:53:07.356851  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 90/120
	I0916 13:53:08.358796  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 91/120
	I0916 13:53:09.360383  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 92/120
	I0916 13:53:10.361756  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 93/120
	I0916 13:53:11.363002  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 94/120
	I0916 13:53:12.364944  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 95/120
	I0916 13:53:13.366433  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 96/120
	I0916 13:53:14.367971  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 97/120
	I0916 13:53:15.369387  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 98/120
	I0916 13:53:16.371077  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 99/120
	I0916 13:53:17.373282  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 100/120
	I0916 13:53:18.374648  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 101/120
	I0916 13:53:19.376158  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 102/120
	I0916 13:53:20.378422  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 103/120
	I0916 13:53:21.380413  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 104/120
	I0916 13:53:22.382556  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 105/120
	I0916 13:53:23.384876  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 106/120
	I0916 13:53:24.386201  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 107/120
	I0916 13:53:25.387419  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 108/120
	I0916 13:53:26.388691  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 109/120
	I0916 13:53:27.390688  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 110/120
	I0916 13:53:28.392110  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 111/120
	I0916 13:53:29.393375  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 112/120
	I0916 13:53:30.394633  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 113/120
	I0916 13:53:31.396027  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 114/120
	I0916 13:53:32.397867  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 115/120
	I0916 13:53:33.399169  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 116/120
	I0916 13:53:34.400375  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 117/120
	I0916 13:53:35.401664  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 118/120
	I0916 13:53:36.402934  743052 main.go:141] libmachine: (ha-190751-m04) Waiting for machine to stop 119/120
	I0916 13:53:37.403502  743052 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 13:53:37.403583  743052 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0916 13:53:37.405432  743052 out.go:201] 
	W0916 13:53:37.406573  743052 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0916 13:53:37.406588  743052 out.go:270] * 
	* 
	W0916 13:53:37.409828  743052 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 13:53:37.411122  743052 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-190751 stop -v=7 --alsologtostderr": exit status 82
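
The GUEST_STOP_TIMEOUT above comes from the stop loop visible in the log: the driver polls the VM state roughly once per second for up to 120 attempts and then gives up while the machine still reports "Running". A minimal Go sketch of that retry pattern follows; the vmState helper and the printed message are illustrative stand-ins, not the real libmachine API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a hypothetical stand-in for the driver call that reports the
// machine state; the real code goes through libmachine's kvm2 plugin.
func vmState() string { return "Running" }

// stopWithTimeout mirrors the "Waiting for machine to stop N/120" loop above:
// poll once per second and give up after maxAttempts if the VM never leaves
// the "Running" state.
func stopWithTimeout(maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if vmState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The run above used 120 attempts; a smaller count keeps this demo short.
	if err := stopWithTimeout(3); err != nil {
		fmt.Println("stop err:", err) // surfaced as GUEST_STOP_TIMEOUT / exit status 82 upstream
	}
}
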
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr: exit status 3 (18.977525528s)

                                                
                                                
-- stdout --
	ha-190751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190751-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:53:37.458953  743496 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:53:37.459214  743496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:53:37.459224  743496 out.go:358] Setting ErrFile to fd 2...
	I0916 13:53:37.459229  743496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:53:37.459548  743496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:53:37.459789  743496 out.go:352] Setting JSON to false
	I0916 13:53:37.459837  743496 mustload.go:65] Loading cluster: ha-190751
	I0916 13:53:37.459962  743496 notify.go:220] Checking for updates...
	I0916 13:53:37.460449  743496 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:53:37.460475  743496 status.go:255] checking status of ha-190751 ...
	I0916 13:53:37.461017  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.461061  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.485236  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0916 13:53:37.485883  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.486568  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.486596  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.487057  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.487287  743496 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:53:37.488957  743496 status.go:330] ha-190751 host status = "Running" (err=<nil>)
	I0916 13:53:37.488977  743496 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:53:37.489234  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.489272  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.503514  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0916 13:53:37.503933  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.504427  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.504463  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.504768  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.504976  743496 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:53:37.507810  743496 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:53:37.508202  743496 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:53:37.508237  743496 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:53:37.508349  743496 host.go:66] Checking if "ha-190751" exists ...
	I0916 13:53:37.508652  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.508690  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.522830  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I0916 13:53:37.523285  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.523733  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.523749  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.524029  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.524194  743496 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:53:37.524375  743496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:53:37.524401  743496 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:53:37.526935  743496 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:53:37.527357  743496 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:53:37.527386  743496 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:53:37.527507  743496 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:53:37.527656  743496 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:53:37.527789  743496 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:53:37.527926  743496 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:53:37.615354  743496 ssh_runner.go:195] Run: systemctl --version
	I0916 13:53:37.622188  743496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:53:37.637443  743496 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:53:37.637489  743496 api_server.go:166] Checking apiserver status ...
	I0916 13:53:37.637525  743496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:53:37.653914  743496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4927/cgroup
	W0916 13:53:37.667412  743496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4927/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:53:37.667461  743496 ssh_runner.go:195] Run: ls
	I0916 13:53:37.671567  743496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:53:37.676048  743496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:53:37.676071  743496 status.go:422] ha-190751 apiserver status = Running (err=<nil>)
	I0916 13:53:37.676094  743496 status.go:257] ha-190751 status: &{Name:ha-190751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:53:37.676121  743496 status.go:255] checking status of ha-190751-m02 ...
	I0916 13:53:37.676509  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.676553  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.691813  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0916 13:53:37.692356  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.692842  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.692862  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.693236  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.693466  743496 main.go:141] libmachine: (ha-190751-m02) Calling .GetState
	I0916 13:53:37.694969  743496 status.go:330] ha-190751-m02 host status = "Running" (err=<nil>)
	I0916 13:53:37.694985  743496 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:53:37.695286  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.695328  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.710445  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I0916 13:53:37.710918  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.711415  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.711434  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.711718  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.711867  743496 main.go:141] libmachine: (ha-190751-m02) Calling .GetIP
	I0916 13:53:37.714235  743496 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:53:37.714673  743496 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:48:25 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:53:37.714701  743496 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:53:37.714826  743496 host.go:66] Checking if "ha-190751-m02" exists ...
	I0916 13:53:37.715123  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.715158  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.729427  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0916 13:53:37.729902  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.730592  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.730612  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.730890  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.731059  743496 main.go:141] libmachine: (ha-190751-m02) Calling .DriverName
	I0916 13:53:37.731208  743496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:53:37.731227  743496 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHHostname
	I0916 13:53:37.733813  743496 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:53:37.734239  743496 main.go:141] libmachine: (ha-190751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:52:c1", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:48:25 +0000 UTC Type:0 Mac:52:54:00:41:52:c1 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-190751-m02 Clientid:01:52:54:00:41:52:c1}
	I0916 13:53:37.734260  743496 main.go:141] libmachine: (ha-190751-m02) DBG | domain ha-190751-m02 has defined IP address 192.168.39.192 and MAC address 52:54:00:41:52:c1 in network mk-ha-190751
	I0916 13:53:37.734427  743496 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHPort
	I0916 13:53:37.734576  743496 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHKeyPath
	I0916 13:53:37.734730  743496 main.go:141] libmachine: (ha-190751-m02) Calling .GetSSHUsername
	I0916 13:53:37.734816  743496 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m02/id_rsa Username:docker}
	I0916 13:53:37.827529  743496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 13:53:37.848121  743496 kubeconfig.go:125] found "ha-190751" server: "https://192.168.39.254:8443"
	I0916 13:53:37.848163  743496 api_server.go:166] Checking apiserver status ...
	I0916 13:53:37.848207  743496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 13:53:37.865405  743496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0916 13:53:37.876370  743496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 13:53:37.876415  743496 ssh_runner.go:195] Run: ls
	I0916 13:53:37.880558  743496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 13:53:37.887573  743496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 13:53:37.887598  743496 status.go:422] ha-190751-m02 apiserver status = Running (err=<nil>)
	I0916 13:53:37.887610  743496 status.go:257] ha-190751-m02 status: &{Name:ha-190751-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 13:53:37.887627  743496 status.go:255] checking status of ha-190751-m04 ...
	I0916 13:53:37.888007  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.888052  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.903635  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0916 13:53:37.903960  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.904394  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.904414  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.904710  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.904876  743496 main.go:141] libmachine: (ha-190751-m04) Calling .GetState
	I0916 13:53:37.906256  743496 status.go:330] ha-190751-m04 host status = "Running" (err=<nil>)
	I0916 13:53:37.906270  743496 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:53:37.906545  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.906599  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.921660  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0916 13:53:37.922015  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.922513  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.922530  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.922816  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.922993  743496 main.go:141] libmachine: (ha-190751-m04) Calling .GetIP
	I0916 13:53:37.925397  743496 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:53:37.925801  743496 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:51:05 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:53:37.925826  743496 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:53:37.925967  743496 host.go:66] Checking if "ha-190751-m04" exists ...
	I0916 13:53:37.926244  743496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:53:37.926277  743496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:53:37.941296  743496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43421
	I0916 13:53:37.941721  743496 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:53:37.942152  743496 main.go:141] libmachine: Using API Version  1
	I0916 13:53:37.942173  743496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:53:37.942455  743496 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:53:37.942604  743496 main.go:141] libmachine: (ha-190751-m04) Calling .DriverName
	I0916 13:53:37.942792  743496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 13:53:37.942826  743496 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHHostname
	I0916 13:53:37.945159  743496 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:53:37.945551  743496 main.go:141] libmachine: (ha-190751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c5:44", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:51:05 +0000 UTC Type:0 Mac:52:54:00:46:c5:44 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-190751-m04 Clientid:01:52:54:00:46:c5:44}
	I0916 13:53:37.945566  743496 main.go:141] libmachine: (ha-190751-m04) DBG | domain ha-190751-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:46:c5:44 in network mk-ha-190751
	I0916 13:53:37.945647  743496 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHPort
	I0916 13:53:37.945805  743496 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHKeyPath
	I0916 13:53:37.945992  743496 main.go:141] libmachine: (ha-190751-m04) Calling .GetSSHUsername
	I0916 13:53:37.946144  743496 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751-m04/id_rsa Username:docker}
	W0916 13:53:56.389860  743496 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.17:22: connect: no route to host
	W0916 13:53:56.389963  743496 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	E0916 13:53:56.389982  743496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	I0916 13:53:56.389991  743496 status.go:257] ha-190751-m04 status: &{Name:ha-190751-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0916 13:53:56.390053  743496 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr" : exit status 3
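
For reference, the status probe that produced the output above checks each node over SSH: disk usage of /var via "df -h /var | awk 'NR==2{print $5}'", kubelet via "systemctl is-active", and, for control-plane nodes, the apiserver's /healthz endpoint through the HA VIP. Below is a rough Go sketch of those three probes; it runs the commands on the local host rather than over SSH and skips certificate verification, so it is purely illustrative and not the minikube status implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// probeNode mirrors the shape of the checks in the log, but runs them on the
// local host instead of over SSH. healthzURL stands in for the HA VIP
// endpoint, e.g. https://192.168.39.254:8443/healthz.
func probeNode(healthzURL string) {
	// Storage capacity of /var (the "df -h /var | awk ..." check).
	if out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output(); err != nil {
		fmt.Println("df check failed:", err)
	} else {
		fmt.Printf("/var usage: %s", out)
	}

	// Kubelet liveness (the "systemctl is-active --quiet ... kubelet" check).
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet: not active:", err)
	} else {
		fmt.Println("kubelet: active")
	}

	// API server health (the /healthz probe; a healthy apiserver answers 200 "ok").
	// Certificate verification is skipped here for brevity; the real check trusts
	// the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthzURL)
	if err != nil {
		// An unreachable node surfaces here, like the "no route to host" error above.
		fmt.Println("apiserver: unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz:", resp.Status)
}

func main() {
	probeNode("https://192.168.39.254:8443/healthz")
}
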
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-190751 -n ha-190751
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-190751 logs -n 25: (1.57948506s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m04 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp testdata/cp-test.txt                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751:/home/docker/cp-test_ha-190751-m04_ha-190751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751 sudo cat                                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m02:/home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m02 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m03:/home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n                                                                 | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | ha-190751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-190751 ssh -n ha-190751-m03 sudo cat                                          | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC | 16 Sep 24 13:41 UTC |
	|         | /home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-190751 node stop m02 -v=7                                                     | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-190751 node start m02 -v=7                                                    | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-190751 -v=7                                                           | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-190751 -v=7                                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-190751 --wait=true -v=7                                                    | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:46 UTC | 16 Sep 24 13:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-190751                                                                | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:51 UTC |                     |
	| node    | ha-190751 node delete m03 -v=7                                                   | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:51 UTC | 16 Sep 24 13:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-190751 stop -v=7                                                              | ha-190751 | jenkins | v1.34.0 | 16 Sep 24 13:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 13:46:39
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 13:46:39.625470  741236 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:46:39.625596  741236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:46:39.625607  741236 out.go:358] Setting ErrFile to fd 2...
	I0916 13:46:39.625613  741236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:46:39.625873  741236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:46:39.626429  741236 out.go:352] Setting JSON to false
	I0916 13:46:39.627418  741236 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12549,"bootTime":1726481851,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 13:46:39.627473  741236 start.go:139] virtualization: kvm guest
	I0916 13:46:39.629923  741236 out.go:177] * [ha-190751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 13:46:39.631246  741236 notify.go:220] Checking for updates...
	I0916 13:46:39.631259  741236 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 13:46:39.632860  741236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 13:46:39.634084  741236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:46:39.635303  741236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:46:39.636770  741236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 13:46:39.638068  741236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 13:46:39.639574  741236 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:46:39.639665  741236 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 13:46:39.640167  741236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:46:39.640206  741236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:46:39.655838  741236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42369
	I0916 13:46:39.656221  741236 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:46:39.656853  741236 main.go:141] libmachine: Using API Version  1
	I0916 13:46:39.656876  741236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:46:39.657261  741236 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:46:39.657437  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:46:39.692639  741236 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 13:46:39.693625  741236 start.go:297] selected driver: kvm2
	I0916 13:46:39.693637  741236 start.go:901] validating driver "kvm2" against &{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:46:39.693800  741236 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 13:46:39.694123  741236 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:46:39.694199  741236 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 13:46:39.708560  741236 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 13:46:39.709256  741236 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 13:46:39.709299  741236 cni.go:84] Creating CNI manager for ""
	I0916 13:46:39.709354  741236 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 13:46:39.709426  741236 start.go:340] cluster config:
	{Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:46:39.709629  741236 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 13:46:39.711081  741236 out.go:177] * Starting "ha-190751" primary control-plane node in "ha-190751" cluster
	I0916 13:46:39.712059  741236 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:46:39.712097  741236 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 13:46:39.712108  741236 cache.go:56] Caching tarball of preloaded images
	I0916 13:46:39.712192  741236 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 13:46:39.712206  741236 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 13:46:39.712337  741236 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/config.json ...
	I0916 13:46:39.712539  741236 start.go:360] acquireMachinesLock for ha-190751: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 13:46:39.712638  741236 start.go:364] duration metric: took 79.689µs to acquireMachinesLock for "ha-190751"
	I0916 13:46:39.712657  741236 start.go:96] Skipping create...Using existing machine configuration
	I0916 13:46:39.712667  741236 fix.go:54] fixHost starting: 
	I0916 13:46:39.712934  741236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:46:39.712971  741236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:46:39.726630  741236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0916 13:46:39.727045  741236 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:46:39.727509  741236 main.go:141] libmachine: Using API Version  1
	I0916 13:46:39.727528  741236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:46:39.727885  741236 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:46:39.728112  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:46:39.728254  741236 main.go:141] libmachine: (ha-190751) Calling .GetState
	I0916 13:46:39.729940  741236 fix.go:112] recreateIfNeeded on ha-190751: state=Running err=<nil>
	W0916 13:46:39.729962  741236 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 13:46:39.734762  741236 out.go:177] * Updating the running kvm2 "ha-190751" VM ...
	I0916 13:46:39.736168  741236 machine.go:93] provisionDockerMachine start ...
	I0916 13:46:39.736191  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:46:39.736429  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:39.739024  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.739520  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:39.739554  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.739694  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:39.739882  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.740020  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.740157  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:39.740352  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:39.740538  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:39.740549  741236 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 13:46:39.858852  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751
	
	I0916 13:46:39.858886  741236 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:46:39.859131  741236 buildroot.go:166] provisioning hostname "ha-190751"
	I0916 13:46:39.859161  741236 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:46:39.859334  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:39.862113  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.862529  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:39.862550  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.862658  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:39.862820  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.862944  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.863059  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:39.863169  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:39.863337  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:39.863348  741236 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-190751 && echo "ha-190751" | sudo tee /etc/hostname
	I0916 13:46:39.987108  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-190751
	
	I0916 13:46:39.987141  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:39.989879  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.990289  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:39.990314  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:39.990550  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:39.990738  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.990899  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:39.991024  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:39.991166  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:39.991344  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:39.991359  741236 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-190751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-190751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-190751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 13:46:40.103358  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 13:46:40.103394  741236 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 13:46:40.103422  741236 buildroot.go:174] setting up certificates
	I0916 13:46:40.103435  741236 provision.go:84] configureAuth start
	I0916 13:46:40.103453  741236 main.go:141] libmachine: (ha-190751) Calling .GetMachineName
	I0916 13:46:40.103720  741236 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:46:40.106488  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.106915  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.106942  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.107152  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:40.109253  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.109653  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.109700  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.109870  741236 provision.go:143] copyHostCerts
	I0916 13:46:40.109912  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:46:40.109956  741236 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 13:46:40.109968  741236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 13:46:40.110048  741236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 13:46:40.110156  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:46:40.110182  741236 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 13:46:40.110189  741236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 13:46:40.110231  741236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 13:46:40.110296  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:46:40.110319  741236 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 13:46:40.110325  741236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 13:46:40.110365  741236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 13:46:40.110445  741236 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.ha-190751 san=[127.0.0.1 192.168.39.94 ha-190751 localhost minikube]
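The server certificate generated here is signed with the CA key listed above and carries SANs for 127.0.0.1, 192.168.39.94, ha-190751, localhost and minikube. As an illustrative check only (not a command from this run), the SAN list of the resulting server.pem could be inspected with openssl, using the path shown in the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'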
	I0916 13:46:40.284286  741236 provision.go:177] copyRemoteCerts
	I0916 13:46:40.284349  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 13:46:40.284381  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:40.286985  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.287309  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.287335  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.287493  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:40.287683  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:40.287832  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:40.287996  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:46:40.376067  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 13:46:40.376143  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 13:46:40.400945  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 13:46:40.401028  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 13:46:40.427679  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 13:46:40.427738  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 13:46:40.451973  741236 provision.go:87] duration metric: took 348.52093ms to configureAuth
	I0916 13:46:40.451997  741236 buildroot.go:189] setting minikube options for container-runtime
	I0916 13:46:40.452230  741236 config.go:182] Loaded profile config "ha-190751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:46:40.452331  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:46:40.455323  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.455765  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:46:40.455791  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:46:40.455917  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:46:40.456105  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:40.456305  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:46:40.456495  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:46:40.456659  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:46:40.456857  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:46:40.456874  741236 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 13:48:11.229084  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 13:48:11.229116  741236 machine.go:96] duration metric: took 1m31.492931394s to provisionDockerMachine
	I0916 13:48:11.229134  741236 start.go:293] postStartSetup for "ha-190751" (driver="kvm2")
	I0916 13:48:11.229147  741236 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 13:48:11.229224  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.229607  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 13:48:11.229646  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.232700  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.233147  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.233175  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.233322  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.233513  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.233682  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.233848  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:48:11.320416  741236 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 13:48:11.324552  741236 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 13:48:11.324575  741236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 13:48:11.324625  741236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 13:48:11.324710  741236 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 13:48:11.324722  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 13:48:11.324827  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 13:48:11.333691  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:48:11.356643  741236 start.go:296] duration metric: took 127.495158ms for postStartSetup
	I0916 13:48:11.356684  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.356935  741236 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 13:48:11.356962  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.359351  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.359712  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.359784  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.359844  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.360021  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.360156  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.360318  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	W0916 13:48:11.443008  741236 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 13:48:11.443035  741236 fix.go:56] duration metric: took 1m31.730369023s for fixHost
	I0916 13:48:11.443054  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.445780  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.446128  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.446162  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.446231  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.446447  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.446565  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.446727  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.446867  741236 main.go:141] libmachine: Using SSH client type: native
	I0916 13:48:11.447048  741236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0916 13:48:11.447059  741236 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 13:48:11.554217  741236 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726494491.520423446
	
	I0916 13:48:11.554245  741236 fix.go:216] guest clock: 1726494491.520423446
	I0916 13:48:11.554255  741236 fix.go:229] Guest: 2024-09-16 13:48:11.520423446 +0000 UTC Remote: 2024-09-16 13:48:11.443041663 +0000 UTC m=+91.854073528 (delta=77.381783ms)
	I0916 13:48:11.554281  741236 fix.go:200] guest clock delta is within tolerance: 77.381783ms
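Both clock readings above fall within the same whole second (1726494491), so the reported delta is just the difference of the fractional parts. As a worked check (illustrative only, values taken from the two log lines above):

    awk 'BEGIN { printf "%.9f s\n", 0.520423446 - 0.443041663 }'   # 0.077381783 s, i.e. the 77.381783ms delta logged above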
	I0916 13:48:11.554288  741236 start.go:83] releasing machines lock for "ha-190751", held for 1m31.841639874s
	I0916 13:48:11.554312  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.554534  741236 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:48:11.557156  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.557580  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.557603  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.557741  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.558240  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.558409  741236 main.go:141] libmachine: (ha-190751) Calling .DriverName
	I0916 13:48:11.558496  741236 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 13:48:11.558546  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.558661  741236 ssh_runner.go:195] Run: cat /version.json
	I0916 13:48:11.558689  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHHostname
	I0916 13:48:11.561199  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.561342  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.561601  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.561627  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.561732  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.561892  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.561919  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:11.561937  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:11.562073  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.562081  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHPort
	I0916 13:48:11.562237  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHKeyPath
	I0916 13:48:11.562235  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:48:11.562371  741236 main.go:141] libmachine: (ha-190751) Calling .GetSSHUsername
	I0916 13:48:11.562481  741236 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/ha-190751/id_rsa Username:docker}
	I0916 13:48:11.674738  741236 ssh_runner.go:195] Run: systemctl --version
	I0916 13:48:11.680449  741236 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 13:48:11.841765  741236 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 13:48:11.849852  741236 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 13:48:11.849912  741236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 13:48:11.859586  741236 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 13:48:11.859606  741236 start.go:495] detecting cgroup driver to use...
	I0916 13:48:11.859654  741236 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 13:48:11.877847  741236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 13:48:11.892017  741236 docker.go:217] disabling cri-docker service (if available) ...
	I0916 13:48:11.892090  741236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 13:48:11.906021  741236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 13:48:11.919462  741236 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 13:48:12.065828  741236 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 13:48:12.217509  741236 docker.go:233] disabling docker service ...
	I0916 13:48:12.217617  741236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 13:48:12.234145  741236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 13:48:12.248297  741236 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 13:48:12.388682  741236 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 13:48:12.528445  741236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 13:48:12.542085  741236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 13:48:12.559524  741236 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 13:48:12.559590  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.571961  741236 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 13:48:12.572018  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.583400  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.594211  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.605692  741236 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 13:48:12.615785  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.625651  741236 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.636001  741236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 13:48:12.645941  741236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 13:48:12.655062  741236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 13:48:12.663929  741236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:48:12.807271  741236 ssh_runner.go:195] Run: sudo systemctl restart crio
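The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart: it pins the pause image to registry.k8s.io/pause:3.10, sets cgroup_manager to "cgroupfs", forces conmon_cgroup to "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A quick sanity check on the VM (illustrative, not part of this run) could be:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",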
	I0916 13:48:13.018807  741236 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 13:48:13.018881  741236 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 13:48:13.023794  741236 start.go:563] Will wait 60s for crictl version
	I0916 13:48:13.023841  741236 ssh_runner.go:195] Run: which crictl
	I0916 13:48:13.027625  741236 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 13:48:13.074513  741236 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 13:48:13.074611  741236 ssh_runner.go:195] Run: crio --version
	I0916 13:48:13.104737  741236 ssh_runner.go:195] Run: crio --version
	I0916 13:48:13.135324  741236 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 13:48:13.136654  741236 main.go:141] libmachine: (ha-190751) Calling .GetIP
	I0916 13:48:13.139202  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:13.139568  741236 main.go:141] libmachine: (ha-190751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:dd:8b", ip: ""} in network mk-ha-190751: {Iface:virbr1 ExpiryTime:2024-09-16 14:37:10 +0000 UTC Type:0 Mac:52:54:00:c8:dd:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-190751 Clientid:01:52:54:00:c8:dd:8b}
	I0916 13:48:13.139597  741236 main.go:141] libmachine: (ha-190751) DBG | domain ha-190751 has defined IP address 192.168.39.94 and MAC address 52:54:00:c8:dd:8b in network mk-ha-190751
	I0916 13:48:13.139779  741236 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 13:48:13.144424  741236 kubeadm.go:883] updating cluster {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 13:48:13.144568  741236 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 13:48:13.144632  741236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:48:13.186085  741236 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 13:48:13.186106  741236 crio.go:433] Images already preloaded, skipping extraction
	I0916 13:48:13.186159  741236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 13:48:13.216653  741236 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 13:48:13.216676  741236 cache_images.go:84] Images are preloaded, skipping loading
	I0916 13:48:13.216689  741236 kubeadm.go:934] updating node { 192.168.39.94 8443 v1.31.1 crio true true} ...
	I0916 13:48:13.216801  741236 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-190751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
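The kubelet drop-in above clears ExecStart and restarts the kubelet with the bootstrap kubeconfig, --hostname-override=ha-190751 and --node-ip=192.168.39.94; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 308-byte scp a little further down. On the VM, the effective unit could be reviewed with (illustrative only):

    systemctl cat kubelet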
	I0916 13:48:13.216863  741236 ssh_runner.go:195] Run: crio config
	I0916 13:48:13.260506  741236 cni.go:84] Creating CNI manager for ""
	I0916 13:48:13.260526  741236 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 13:48:13.260537  741236 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 13:48:13.260559  741236 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-190751 NodeName:ha-190751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 13:48:13.260698  741236 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-190751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
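The kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml.new (the 2150-byte scp below). As a hedged aside, a file of this shape can be checked offline with kubeadm's own validator, assuming kubeadm is among the v1.31.1 binaries found under /var/lib/minikube/binaries:

    /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new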
	
	I0916 13:48:13.260719  741236 kube-vip.go:115] generating kube-vip config ...
	I0916 13:48:13.260759  741236 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 13:48:13.272030  741236 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 13:48:13.272144  741236 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
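kube-vip runs as a static pod (the manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml below) and, with leader election enabled, the node holding the plndr-cp-lock lease binds the control-plane VIP 192.168.39.254 on eth0. Purely as an illustration of what this manifest implies, the VIP and the container could be checked on that node with:

    ip -4 addr show dev eth0 | grep 192.168.39.254
    sudo crictl ps --name kube-vip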
	I0916 13:48:13.272196  741236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 13:48:13.281569  741236 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 13:48:13.281649  741236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 13:48:13.290638  741236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 13:48:13.306198  741236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 13:48:13.321505  741236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 13:48:13.337208  741236 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 13:48:13.353736  741236 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 13:48:13.357394  741236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 13:48:13.502995  741236 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 13:48:13.517369  741236 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751 for IP: 192.168.39.94
	I0916 13:48:13.517391  741236 certs.go:194] generating shared ca certs ...
	I0916 13:48:13.517412  741236 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:48:13.517602  741236 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 13:48:13.517660  741236 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 13:48:13.517745  741236 certs.go:256] generating profile certs ...
	I0916 13:48:13.517932  741236 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/client.key
	I0916 13:48:13.517968  741236 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e
	I0916 13:48:13.517984  741236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94 192.168.39.192 192.168.39.134 192.168.39.254]
	I0916 13:48:13.658856  741236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e ...
	I0916 13:48:13.658887  741236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e: {Name:mk5128865dd3ed5cf8f80f0e3504eee8210f3b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:48:13.659056  741236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e ...
	I0916 13:48:13.659066  741236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e: {Name:mk2b0a5cb0c64f285ce1d11db681fd7632720418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 13:48:13.659141  741236 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt.ef12b01e -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt
	I0916 13:48:13.659281  741236 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key.ef12b01e -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key
	I0916 13:48:13.659413  741236 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key
	I0916 13:48:13.659428  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 13:48:13.659441  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 13:48:13.659452  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 13:48:13.659476  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 13:48:13.659499  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 13:48:13.659509  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 13:48:13.659523  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 13:48:13.659562  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 13:48:13.659619  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 13:48:13.659650  741236 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 13:48:13.659660  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 13:48:13.659681  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 13:48:13.659702  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 13:48:13.659723  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 13:48:13.659760  741236 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 13:48:13.659811  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.659828  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 13:48:13.659840  741236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:13.660426  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 13:48:13.684690  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 13:48:13.706838  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 13:48:13.729327  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 13:48:13.751522  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 13:48:13.774140  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 13:48:13.797072  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 13:48:13.818969  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/ha-190751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 13:48:13.861887  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 13:48:13.886393  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 13:48:13.908526  741236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 13:48:13.930618  741236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 13:48:13.945997  741236 ssh_runner.go:195] Run: openssl version
	I0916 13:48:13.951602  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 13:48:13.961514  741236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.965676  741236 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.965710  741236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 13:48:13.971089  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 13:48:13.979677  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 13:48:13.990388  741236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 13:48:13.994750  741236 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 13:48:13.994795  741236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 13:48:14.000058  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 13:48:14.008608  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 13:48:14.018567  741236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:14.023148  741236 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:14.023190  741236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 13:48:14.028363  741236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 13:48:14.036850  741236 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 13:48:14.041422  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 13:48:14.047282  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 13:48:14.052398  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 13:48:14.057515  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 13:48:14.062672  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 13:48:14.067782  741236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
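Each openssl -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds (24 hours): openssl exits 0 if the cert is still valid past that window and non-zero otherwise. For example (illustrative only):

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h"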
	I0916 13:48:14.073033  741236 kubeadm.go:392] StartCluster: {Name:ha-190751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-190751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:48:14.073153  741236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 13:48:14.073206  741236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 13:48:14.111273  741236 cri.go:89] found id: "d8632a302625a774aeda4dc20b6685a2590ebfab7e534fcd2a864b4d7c73f4f1"
	I0916 13:48:14.111295  741236 cri.go:89] found id: "a9fd590fef01ea67abfba5099c5976e0f9a7071dc1d5440c355734d0d2c99e17"
	I0916 13:48:14.111301  741236 cri.go:89] found id: "653d7d20fc0c420b88d6cf3b91d680ee591f56c0e7d97b5ab4b0f7a32bd46d45"
	I0916 13:48:14.111321  741236 cri.go:89] found id: "e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781"
	I0916 13:48:14.111326  741236 cri.go:89] found id: "5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7"
	I0916 13:48:14.111330  741236 cri.go:89] found id: "85e2956fe35237a31eb3777a4db47ef14cfd27c1fa6b47b8e68d421b6f0388b0"
	I0916 13:48:14.111334  741236 cri.go:89] found id: "d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee"
	I0916 13:48:14.111337  741236 cri.go:89] found id: "876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629"
	I0916 13:48:14.111340  741236 cri.go:89] found id: "ce48d6fe2a10977168e6aa4159b5fa451fbf190ee313d8d6500cf399312b4061"
	I0916 13:48:14.111345  741236 cri.go:89] found id: "0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90"
	I0916 13:48:14.111351  741236 cri.go:89] found id: "13c8d0e1fdcbee87a87cace216d5dc79bc82e8045e7d582390ca41efdbcadcad"
	I0916 13:48:14.111354  741236 cri.go:89] found id: "2cb375fdf3e21c70ce4d6d7afaeb7e323643bddc06490de3e9e9973f9817f85b"
	I0916 13:48:14.111360  741236 cri.go:89] found id: "3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c"
	I0916 13:48:14.111363  741236 cri.go:89] found id: ""
	I0916 13:48:14.111412  741236 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 13:53:56 ha-190751 crio[3513]: time="2024-09-16 13:53:56.992955988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494836992933233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a970c86-7e20-40cc-b28a-5bf73cf86624 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:53:56 ha-190751 crio[3513]: time="2024-09-16 13:53:56.993425240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d35aec2-a923-4c27-9a6c-8a8ca2ff561a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:56 ha-190751 crio[3513]: time="2024-09-16 13:53:56.993478175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d35aec2-a923-4c27-9a6c-8a8ca2ff561a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:56 ha-190751 crio[3513]: time="2024-09-16 13:53:56.998397687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d35aec2-a923-4c27-9a6c-8a8ca2ff561a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.047151967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a5e2f7c-6a3d-4212-b126-13044d862bec name=/runtime.v1.RuntimeService/Version
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.047240946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a5e2f7c-6a3d-4212-b126-13044d862bec name=/runtime.v1.RuntimeService/Version
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.048605197Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17a25c05-71ef-4290-be25-d9bed84187ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.049264452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494837049233151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17a25c05-71ef-4290-be25-d9bed84187ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.049921454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c601ebc-8d50-4928-80cc-5ad8a356e015 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.049995564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c601ebc-8d50-4928-80cc-5ad8a356e015 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.050550939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c601ebc-8d50-4928-80cc-5ad8a356e015 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.098382603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55e2bd15-1c5e-468f-a166-87f86787c873 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.098451118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55e2bd15-1c5e-468f-a166-87f86787c873 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.099445949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf49b32a-b1a7-4dd0-a72d-6940047e34dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.100448514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494837100422501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf49b32a-b1a7-4dd0-a72d-6940047e34dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.101211918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27b0585b-9b80-4b95-8ab1-766e2969baae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.101266287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27b0585b-9b80-4b95-8ab1-766e2969baae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.101668449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27b0585b-9b80-4b95-8ab1-766e2969baae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.140807706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73b34c05-63fb-45f0-a295-8bb0fd8d5608 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.140927163Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73b34c05-63fb-45f0-a295-8bb0fd8d5608 name=/runtime.v1.RuntimeService/Version
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.143028513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a3449d9-abf4-46e7-aadd-de6c2283cf1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.143747832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494837143724852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a3449d9-abf4-46e7-aadd-de6c2283cf1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.144338873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d913db72-1249-4299-8098-692045e0790c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.144410312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d913db72-1249-4299-8098-692045e0790c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 13:53:57 ha-190751 crio[3513]: time="2024-09-16 13:53:57.144812910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68a17947275a885ac1338458515d3c3815f81f0646c7d0f59f5025fbcb246718,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726494585635614611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726494573633416875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726494543639297101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b64476badf43fdcb45977d256ecaf7ffa42fe0af3392b766efd09e5fac748c,PodSandboxId:de7aec1e5e47fa83adcd5bb2d56dac6fedc270654003736649663b10a039a454,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726494534634372815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f01b81dc-2ff8-41de-8c63-e09a0ead6545,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f8a3dd5830446e4f24811190e3c69e2d5d610cf3df928a2966bd194e75a531,PodSandboxId:467454c1433729c870ed967292fa6888f5f05a6a9eeb462cbf661e9f65239b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726494530923508312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81,PodSandboxId:8d104a92ea828fa1eda8b699644db2047268468a477a700b9381c6d82c22dfd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726494530355993292,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae495349ac02bb4b5addcdcea0d25715,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fbd2ab36bd424a75a296c4c526c16aacceafc9d6282b06814fe7cf0b04a119,PodSandboxId:21a6ffa293c766ba33093469c1a549a649e2513f0ddd412dd71fb4efeab0a89e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726494514007319937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e8e356afe10ade9e2bb9eb90e11528,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9,PodSandboxId:2f8de5f3a3283ad6da2d0d331b8846f49c7ff1f7d6346ebee487ad3da80c0874,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726494497835612345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271,PodSandboxId:2bf212396f86e717f6511c6f01c2db4695c6c9be6d50491e90c14829e49b1091,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497876326013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad,PodSandboxId:474617d62056c3c002f48a7616263236dc22f673d5d12a09335e8a051dcc7081,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726494497772321052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f,PodSandboxId:b0afa0ba4326dc04f1f27aea371594cab14a18fb3648ea6e23bd80bdd0002769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726494497609209176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190
751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e,PodSandboxId:379b8517e1b921e22e276aa378b254b916bd475f3463191fcc4436f52ece84b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726494497466465541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d,PodSandboxId:f99d150e833d7697431bb77854b9749f955d6285cef08623d43511416ae3f61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726494497538375864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746
d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad,PodSandboxId:99daa5073d1f26dc7805036aa4f09150d16ea0b02f568549c7f09f415301efb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726494497414585430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a42ea5903905c847366e72d48200db,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ff16b4cf488d896605be284a1159f722aa4cc147bb74a8eeaf47bee3912ead0,PodSandboxId:70804a075dc34bfcfcd945e41bc9b9b50887dfbed8832df3453a49df237f3a10,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726494009959655688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lsqcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0c38d7-fa7a-4b02-b417-1da8e210cc78,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7,PodSandboxId:faf5324ae84ec325360c692d7e663f4a36e234c8403a4e72f80d57211acd5a2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905851195361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9lw8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ae9b63-eb5d-486e-a9f1-89edb7ffc3a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781,PodSandboxId:d74b47a92fc73e9c9e0646cddd475b1d9c4c084abec46863815d97b0f05bd238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726493905853300694,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gzkpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0ada83-1020-4bd4-be70-9a1a5972ff59,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee,PodSandboxId:e227eb76eed28456da60c41632338b32cbb3ec7c34407c7745860a265438ce7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726493863271605664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9d7kt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8c34d1-5931-4e70-8d01-798817397f78,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629,PodSandboxId:06c5005bbb7151b021f0bc1b7f3e8818b673f7067ec8acf264d4919832abfb8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726493862131373043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gpb96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb699362-acf1-471c-8b39-8a7498a7da52,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90,PodSandboxId:235857e1be3ea44c435d98b63c4e4bf947b816eb9121b4867264d82144ce5cc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726493850593271561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2cc73ce1a8f746d45b3276bee469d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c,PodSandboxId:2b68d5be2f2cfa03aea5cc5c13039a8c244e9a8260f12dd48010acb6164d6332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726493850404356901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-190751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9922f803bd7b5d0ba2ffa0c06886b9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d913db72-1249-4299-8098-692045e0790c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68a17947275a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   de7aec1e5e47f       storage-provisioner
	5e02e885f6ff0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   3                   8d104a92ea828       kube-controller-manager-ha-190751
	8d9edc7df5a23       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   99daa5073d1f2       kube-apiserver-ha-190751
	98b64476badf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   de7aec1e5e47f       storage-provisioner
	40f8a3dd58304       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   467454c143372       busybox-7dff88458-lsqcp
	19c8c831977cd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   2                   8d104a92ea828       kube-controller-manager-ha-190751
	46fbd2ab36bd4       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   21a6ffa293c76       kube-vip-ha-190751
	a2d90a9f34541       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   2bf212396f86e       coredns-7c65d6cfc9-9lw8n
	aff424a1e0a36       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   2f8de5f3a3283       kindnet-gpb96
	db48b82a19ccd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   474617d62056c       coredns-7c65d6cfc9-gzkpj
	f88a0c0f7b294       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   b0afa0ba4326d       kube-scheduler-ha-190751
	56e43e1330c7a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   f99d150e833d7       etcd-ha-190751
	d9d5a75c9054b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   379b8517e1b92       kube-proxy-9d7kt
	d509e2938a032       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   99daa5073d1f2       kube-apiserver-ha-190751
	1ff16b4cf488d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   70804a075dc34       busybox-7dff88458-lsqcp
	e33b03d2f6fce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   d74b47a92fc73       coredns-7c65d6cfc9-gzkpj
	5597ff6fa9128       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   faf5324ae84ec       coredns-7c65d6cfc9-9lw8n
	d2fb4efd07b92       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      16 minutes ago      Exited              kube-proxy                0                   e227eb76eed28       kube-proxy-9d7kt
	876c9f45c3848       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      16 minutes ago      Exited              kindnet-cni               0                   06c5005bbb715       kindnet-gpb96
	0cd93f6d25b96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   235857e1be3ea       etcd-ha-190751
	3d2fdc916e364       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   2b68d5be2f2cf       kube-scheduler-ha-190751
	
	
	==> coredns [5597ff6fa9128f07d2dc3f058b9b448395d0989aa657629ef5c6819b33cc8cb7] <==
	[INFO] 10.244.2.2:39675 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165265s
	[INFO] 10.244.2.2:37048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001066948s
	[INFO] 10.244.2.2:56795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069535s
	[INFO] 10.244.1.2:57890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135841s
	[INFO] 10.244.1.2:47650 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001636029s
	[INFO] 10.244.1.2:50206 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099676s
	[INFO] 10.244.1.2:55092 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109421s
	[INFO] 10.244.0.4:53870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097861s
	[INFO] 10.244.0.4:42443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049844s
	[INFO] 10.244.0.4:52687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057203s
	[INFO] 10.244.2.2:34837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122205s
	[INFO] 10.244.2.2:39661 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123335s
	[INFO] 10.244.2.2:52074 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080782s
	[INFO] 10.244.1.2:41492 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098139s
	[INFO] 10.244.1.2:49674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088502s
	[INFO] 10.244.0.4:53518 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259854s
	[INFO] 10.244.0.4:41118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155352s
	[INFO] 10.244.0.4:33823 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119363s
	[INFO] 10.244.2.2:44582 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180459s
	[INFO] 10.244.2.2:52118 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196503s
	[INFO] 10.244.1.2:43708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011298s
	[INFO] 10.244.1.2:42623 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011952s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1775&timeout=9m50s&timeoutSeconds=590&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [a2d90a9f3454116e0b4e843df06f66a7c76a063bd4dbea0132cc1d935208c271] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46722->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1823141026]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 13:48:29.548) (total time: 13378ms):
	Trace[1823141026]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46722->10.96.0.1:443: read: connection reset by peer 13377ms (13:48:42.926)
	Trace[1823141026]: [13.378031764s] [13.378031764s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46722->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45484->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45484->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [db48b82a19ccdd1356d28e98d010f8a3d3f0927a86408f5d028b655b376dc8ad] <==
	[INFO] plugin/kubernetes: Trace[2045556924]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 13:48:29.278) (total time: 10001ms):
	Trace[2045556924]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:48:39.279)
	Trace[2045556924]: [10.001741529s] [10.001741529s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e33b03d2f6fce87730d338d716b579f61fa7dca1205bac35abaf88257659f781] <==
	[INFO] 10.244.1.2:37179 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001780819s
	[INFO] 10.244.0.4:50469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268768s
	[INFO] 10.244.0.4:48039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163904s
	[INFO] 10.244.0.4:34482 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084666s
	[INFO] 10.244.0.4:39892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003221704s
	[INFO] 10.244.0.4:58788 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139358s
	[INFO] 10.244.2.2:57520 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099764s
	[INFO] 10.244.2.2:33023 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142913s
	[INFO] 10.244.2.2:46886 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071348s
	[INFO] 10.244.1.2:48181 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120675s
	[INFO] 10.244.1.2:46254 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007984s
	[INFO] 10.244.1.2:51236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001105782s
	[INFO] 10.244.1.2:43880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069986s
	[INFO] 10.244.0.4:51480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109815s
	[INFO] 10.244.2.2:33439 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156091s
	[INFO] 10.244.1.2:40338 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202214s
	[INFO] 10.244.1.2:41511 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135597s
	[INFO] 10.244.0.4:57318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142285s
	[INFO] 10.244.2.2:51122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159294s
	[INFO] 10.244.2.2:45477 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016112s
	[INFO] 10.244.1.2:53140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015857s
	[INFO] 10.244.1.2:56526 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182857s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1775&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-190751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T13_37_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:53:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:37:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:49:02 +0000   Mon, 16 Sep 2024 13:38:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    ha-190751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 413212b342c542b3a63285d76f88cc9f
	  System UUID:                413212b3-42c5-42b3-a632-85d76f88cc9f
	  Boot ID:                    757a1925-23d7-4d65-93ec-732a8b69642f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lsqcp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-9lw8n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-gzkpj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-190751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-gpb96                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-190751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-190751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-9d7kt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-190751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-190751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m55s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-190751 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-190751 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-190751 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-190751 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Warning  ContainerGCFailed        6m18s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m44s (x3 over 6m33s)  kubelet          Node ha-190751 status is now: NodeNotReady
	  Normal   RegisteredNode           4m57s                  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-190751 event: Registered Node ha-190751 in Controller
	
	
	Name:               ha-190751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_38_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:38:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:53:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 13:49:46 +0000   Mon, 16 Sep 2024 13:49:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.192
	  Hostname:    ha-190751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 550acf86555f4901ac21dc9dc8bbc28f
	  System UUID:                550acf86-555f-4901-ac21-dc9dc8bbc28f
	  Boot ID:                    6b926d7d-06da-4813-88d7-fe05ddd773b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wnt5k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-190751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-qfl9j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-190751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-190751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-24q9n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-190751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-190751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-190751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-190751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-190751-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-190751-m02 status is now: NodeNotReady
	  Normal  Starting                 5m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-190751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-190751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-190751-m02 event: Registered Node ha-190751-m02 in Controller
	
	
	Name:               ha-190751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-190751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=ha-190751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T13_40_46_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 13:40:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-190751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 13:51:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:52:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:52:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:52:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 13:51:11 +0000   Mon, 16 Sep 2024 13:52:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-190751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 99332c0e26304b3097b2fce26060f009
	  System UUID:                99332c0e-2630-4b30-97b2-fce26060f009
	  Boot ID:                    787b425c-db32-4bf7-817c-db14aaf6d08d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dlvpk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-9nmfv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-tk6f6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m42s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-190751-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-190751-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-190751-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-190751-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m57s                  node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   NodeNotReady             4m17s                  node-controller  Node ha-190751-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-190751-m04 event: Registered Node ha-190751-m04 in Controller
	  Normal   Starting                 2m46s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m46s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m46s (x2 over 2m46s)  kubelet          Node ha-190751-m04 has been rebooted, boot id: 787b425c-db32-4bf7-817c-db14aaf6d08d
	  Normal   NodeHasSufficientMemory  2m46s (x3 over 2m46s)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x3 over 2m46s)  kubelet          Node ha-190751-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x3 over 2m46s)  kubelet          Node ha-190751-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m46s                  kubelet          Node ha-190751-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m46s                  kubelet          Node ha-190751-m04 status is now: NodeReady
	  Normal   NodeNotReady             101s                   node-controller  Node ha-190751-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.291459] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.062528] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065864] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.157574] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.135658] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.243263] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.876209] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.159219] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.061484] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.191933] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.087738] kauditd_printk_skb: 79 callbacks suppressed
	[Sep16 13:38] kauditd_printk_skb: 69 callbacks suppressed
	[ +12.548550] kauditd_printk_skb: 26 callbacks suppressed
	[Sep16 13:48] systemd-fstab-generator[3437]: Ignoring "noauto" option for root device
	[  +0.154511] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.174882] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.138837] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.280087] systemd-fstab-generator[3503]: Ignoring "noauto" option for root device
	[  +0.689243] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +3.674650] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.074803] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.597115] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.272570] kauditd_printk_skb: 10 callbacks suppressed
	[Sep16 13:49] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [0cd93f6d25b96fcafeadbe4368203439d003e6e60832e2405318039bac48cd90] <==
	{"level":"info","ts":"2024-09-16T13:46:40.589998Z","caller":"traceutil/trace.go:171","msg":"trace[1690657794] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"831.1199ms","start":"2024-09-16T13:46:39.758872Z","end":"2024-09-16T13:46:40.589992Z","steps":["trace[1690657794] 'agreement among raft nodes before linearized reading'  (duration: 824.44903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T13:46:40.590019Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T13:46:39.758808Z","time spent":"831.202908ms","remote":"127.0.0.1:50254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 "}
	2024/09/16 13:46:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T13:46:40.621370Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T13:46:40.621457Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T13:46:40.622533Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c23cd90330b5fc4f","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-16T13:46:40.624102Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624201Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624224Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624643Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.624746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.625118Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.625157Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cfda983678b85d00"}
	{"level":"info","ts":"2024-09-16T13:46:40.625233Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625311Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625484Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625602Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625705Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625878Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.625931Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:46:40.632191Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"warn","ts":"2024-09-16T13:46:40.632287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.896036216s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-16T13:46:40.632542Z","caller":"traceutil/trace.go:171","msg":"trace[226656043] range","detail":"{range_begin:; range_end:; }","duration":"8.896304507s","start":"2024-09-16T13:46:31.736230Z","end":"2024-09-16T13:46:40.632535Z","steps":["trace[226656043] 'agreement among raft nodes before linearized reading'  (duration: 8.896035567s)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T13:46:40.632463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-09-16T13:46:40.632634Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-190751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	
	
	==> etcd [56e43e1330c7a560e64d2d1d8d2047c7993487a6de8d12b05d4867bc2484e09d] <==
	{"level":"warn","ts":"2024-09-16T13:50:33.361935Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57f8f59559f02f50","rtt":"0s","error":"dial tcp 192.168.39.134:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T13:51:14.632021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.505998ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18180910728332811424 > lease_revoke:<id:2f5091fb196c9d8d>","response":"size:29"}
	{"level":"warn","ts":"2024-09-16T13:51:14.632744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.1098ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T13:51:14.632817Z","caller":"traceutil/trace.go:171","msg":"trace[218423233] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2542; }","duration":"118.173876ms","start":"2024-09-16T13:51:14.514616Z","end":"2024-09-16T13:51:14.632790Z","steps":["trace[218423233] 'range keys from in-memory index tree'  (duration: 118.1038ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T13:51:16.726921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.034074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-ha-190751-m02\" ","response":"range_response_count:1 size:4326"}
	{"level":"info","ts":"2024-09-16T13:51:16.727023Z","caller":"traceutil/trace.go:171","msg":"trace[1118241631] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-ha-190751-m02; range_end:; response_count:1; response_revision:2553; }","duration":"139.235457ms","start":"2024-09-16T13:51:16.587768Z","end":"2024-09-16T13:51:16.727003Z","steps":["trace[1118241631] 'range keys from in-memory index tree'  (duration: 138.102768ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T13:51:23.905763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f switched to configuration voters=(13996300349686021199 14977450870495010048)"}
	{"level":"info","ts":"2024-09-16T13:51:23.907745Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","removed-remote-peer-id":"57f8f59559f02f50","removed-remote-peer-urls":["https://192.168.39.134:2380"]}
	{"level":"info","ts":"2024-09-16T13:51:23.907924Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"57f8f59559f02f50"}
	{"level":"warn","ts":"2024-09-16T13:51:23.907997Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"c23cd90330b5fc4f","removed-member-id":"57f8f59559f02f50"}
	{"level":"warn","ts":"2024-09-16T13:51:23.908092Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-09-16T13:51:23.908290Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:51:23.908338Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57f8f59559f02f50"}
	{"level":"warn","ts":"2024-09-16T13:51:23.908674Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:51:23.908782Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:51:23.908998Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"warn","ts":"2024-09-16T13:51:23.909283Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50","error":"context canceled"}
	{"level":"warn","ts":"2024-09-16T13:51:23.909553Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"57f8f59559f02f50","error":"failed to read 57f8f59559f02f50 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-16T13:51:23.909620Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"warn","ts":"2024-09-16T13:51:23.909774Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50","error":"context canceled"}
	{"level":"info","ts":"2024-09-16T13:51:23.909816Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c23cd90330b5fc4f","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:51:23.909939Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"57f8f59559f02f50"}
	{"level":"info","ts":"2024-09-16T13:51:23.909999Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"c23cd90330b5fc4f","removed-remote-peer-id":"57f8f59559f02f50"}
	{"level":"warn","ts":"2024-09-16T13:51:23.924057Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.134:56070","server-name":"","error":"read tcp 192.168.39.94:2380->192.168.39.134:56070: read: connection reset by peer"}
	{"level":"warn","ts":"2024-09-16T13:51:23.924343Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.134:56060","server-name":"","error":"read tcp 192.168.39.94:2380->192.168.39.134:56060: read: connection reset by peer"}
	
	
	==> kernel <==
	 13:53:57 up 16 min,  0 users,  load average: 0.17, 0.34, 0.30
	Linux ha-190751 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [876c9f45c384802a996dd22d917975d86b875cbde33520b6bfb8ec6f84b39629] <==
	I0916 13:46:03.329387       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:13.330372       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:46:13.330536       1 main.go:299] handling current node
	I0916 13:46:13.330576       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:46:13.330597       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:13.330748       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:46:13.330770       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:46:13.330910       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:46:13.330937       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:46:23.330925       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:46:23.331009       1 main.go:299] handling current node
	I0916 13:46:23.331037       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:46:23.331055       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:23.331212       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:46:23.331233       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:46:23.331286       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:46:23.331304       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:46:33.331747       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:46:33.331911       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:46:33.332076       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0916 13:46:33.332145       1 main.go:322] Node ha-190751-m03 has CIDR [10.244.2.0/24] 
	I0916 13:46:33.332275       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:46:33.332300       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:46:33.332372       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:46:33.332391       1 main.go:299] handling current node
	
	
	==> kindnet [aff424a1e0a36fa17a28c65bed39b131cd77e229d6a5125231b41cedffa463c9] <==
	I0916 13:53:09.168799       1 main.go:299] handling current node
	I0916 13:53:19.167810       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:53:19.167911       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:53:19.168056       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:53:19.168079       1 main.go:299] handling current node
	I0916 13:53:19.168108       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:53:19.168112       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:53:29.170942       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:53:29.171005       1 main.go:299] handling current node
	I0916 13:53:29.171042       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:53:29.171048       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:53:29.171159       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:53:29.171180       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:53:39.167189       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:53:39.167351       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	I0916 13:53:39.167544       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:53:39.167572       1 main.go:299] handling current node
	I0916 13:53:39.167603       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:53:39.167629       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:53:49.172343       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0916 13:53:49.172519       1 main.go:299] handling current node
	I0916 13:53:49.172569       1 main.go:295] Handling node with IPs: map[192.168.39.192:{}]
	I0916 13:53:49.172589       1 main.go:322] Node ha-190751-m02 has CIDR [10.244.1.0/24] 
	I0916 13:53:49.172816       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0916 13:53:49.172934       1 main.go:322] Node ha-190751-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d9edc7df5a2360bfab4a65fc63e9ce882e388f183f784c2b4e126b6614717bb] <==
	I0916 13:49:05.790662       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 13:49:05.880151       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 13:49:05.880184       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 13:49:05.881399       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 13:49:05.881681       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 13:49:05.884128       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 13:49:05.884200       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 13:49:05.884316       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 13:49:05.897919       1 aggregator.go:171] initial CRD sync complete...
	I0916 13:49:05.897965       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 13:49:05.897971       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 13:49:05.897976       1 cache.go:39] Caches are synced for autoregister controller
	I0916 13:49:05.906509       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 13:49:05.907397       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 13:49:05.913327       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 13:49:05.920996       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 13:49:05.921032       1 policy_source.go:224] refreshing policies
	W0916 13:49:05.924737       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.134 192.168.39.192]
	I0916 13:49:05.926537       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 13:49:05.939384       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 13:49:05.942540       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 13:49:06.006891       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 13:49:06.789288       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 13:49:07.056917       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.192 192.168.39.94]
	W0916 13:51:37.064733       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.192 192.168.39.94]
	
	
	==> kube-apiserver [d509e2938a032069254cbcb0c924947c72c27bc23984e04701fbe6caef46adad] <==
	I0916 13:48:18.240271       1 options.go:228] external host was not specified, using 192.168.39.94
	I0916 13:48:18.244399       1 server.go:142] Version: v1.31.1
	I0916 13:48:18.244462       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:48:19.289487       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0916 13:48:19.300011       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 13:48:19.303912       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0916 13:48:19.305870       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0916 13:48:19.306259       1 instance.go:232] Using reconciler: lease
	W0916 13:48:39.288638       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0916 13:48:39.288639       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0916 13:48:39.306944       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [19c8c831977cdcd20f220c683022ec7858cf50dbcd786c60fdc6155f6bc7eb81] <==
	I0916 13:48:50.770107       1 serving.go:386] Generated self-signed cert in-memory
	I0916 13:48:51.051966       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 13:48:51.052003       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:48:51.053274       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 13:48:51.053502       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 13:48:51.053509       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 13:48:51.053529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 13:49:01.055901       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.94:8443/healthz\": dial tcp 192.168.39.94:8443: connect: connection refused"
	
	
	==> kube-controller-manager [5e02e885f6ff0ac9a43a7b7198e00c6c903eded4e3272b993d0acc2558a5663a] <==
	I0916 13:51:23.806756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.938087ms"
	I0916 13:51:23.807028       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.385µs"
	I0916 13:51:34.876927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-190751-m04"
	I0916 13:51:34.877337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m03"
	E0916 13:51:36.078455       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:36.078494       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:36.078500       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:36.078506       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:36.078511       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:56.079713       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:56.079782       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:56.079790       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:56.079795       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:51:56.079800       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	I0916 13:52:16.058268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	E0916 13:52:16.080234       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:52:16.080287       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:52:16.080298       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:52:16.080309       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	E0916 13:52:16.080314       1 gc_controller.go:151] "Failed to get node" err="node \"ha-190751-m03\" not found" logger="pod-garbage-collector-controller" node="ha-190751-m03"
	I0916 13:52:16.087193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:52:16.160197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.902293ms"
	I0916 13:52:16.160368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.39µs"
	I0916 13:52:16.178773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	I0916 13:52:21.199011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-190751-m04"
	
	
	==> kube-proxy [d2fb4efd07b928023ce922b08d4d29585e3080441cdb212649ac1338243874ee] <==
	E0916 13:45:35.743342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:38.799129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:38.799328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:38.799503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:38.799575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:38.799697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:38.799777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:44.944023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:44.944104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:44.944198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:44.944232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:44.944378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:44.944415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:57.230707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:57.231068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:45:57.231306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:45:57.231398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:00.302389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:00.302579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:18.735179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:18.735288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-190751&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:21.807716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:21.808160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 13:46:24.878689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 13:46:24.878920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [d9d5a75c9054b2414f4c5763f394eb7a72f95e0360e67bf55e3b3ded96ccbd6e] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 13:48:21.615188       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:24.688312       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:27.758262       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:33.902899       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:48:43.118561       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 13:49:01.550524       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-190751\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 13:49:01.550585       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0916 13:49:01.550651       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 13:49:01.585204       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 13:49:01.585283       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 13:49:01.585308       1 server_linux.go:169] "Using iptables Proxier"
	I0916 13:49:01.587678       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 13:49:01.588079       1 server.go:483] "Version info" version="v1.31.1"
	I0916 13:49:01.588114       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 13:49:01.590228       1 config.go:199] "Starting service config controller"
	I0916 13:49:01.590270       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 13:49:01.590292       1 config.go:105] "Starting endpoint slice config controller"
	I0916 13:49:01.590295       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 13:49:01.590878       1 config.go:328] "Starting node config controller"
	I0916 13:49:01.590904       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 13:49:03.590698       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 13:49:03.590763       1 shared_informer.go:320] Caches are synced for service config
	I0916 13:49:03.590976       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3d2fdc916e364191824e8eeeeebd2bd4bde311ec642553730ff1fa83d5ae6b3c] <==
	E0916 13:37:35.232941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 13:37:36.647896       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 13:40:46.111447       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.111635       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1bfac972-00f2-440b-8577-132ebf2ef8fa(kube-system/kube-proxy-v4ngc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v4ngc"
	E0916 13:40:46.111674       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v4ngc\": pod kube-proxy-v4ngc is already assigned to node \"ha-190751-m04\"" pod="kube-system/kube-proxy-v4ngc"
	I0916 13:40:46.111701       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v4ngc" node="ha-190751-m04"
	E0916 13:40:46.136509       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	E0916 13:40:46.136581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a53af4e2-ffdc-4e32-8f97-f0b2684145be(kube-system/kindnet-9nmfv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9nmfv"
	E0916 13:40:46.136599       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9nmfv\": pod kindnet-9nmfv is already assigned to node \"ha-190751-m04\"" pod="kube-system/kindnet-9nmfv"
	I0916 13:40:46.136617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9nmfv" node="ha-190751-m04"
	E0916 13:46:32.299016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0916 13:46:32.673881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 13:46:33.207918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0916 13:46:34.163140       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0916 13:46:34.525037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 13:46:35.918803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0916 13:46:35.998358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 13:46:36.664235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0916 13:46:37.506907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 13:46:38.862817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0916 13:46:39.672670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0916 13:46:39.958770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	I0916 13:46:40.565675       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 13:46:40.565744       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 13:46:40.567398       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f88a0c0f7b2943ff725145bc499f835202476a9fca62dec354a893db03f49b8f] <==
	W0916 13:48:57.716581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.94:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:57.716686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.94:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:58.088358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.94:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:58.088427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.94:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:58.600434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.94:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:58.600521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.94:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:58.987804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:58.987943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:48:59.738158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:48:59.738266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.042343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.94:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.042416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.94:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.269741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.94:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.269942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.94:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.515438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.94:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.515502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.94:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:00.627437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:00.627513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:01.322481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.94:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:01.322544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.94:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:01.516424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.94:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:01.516482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.94:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	W0916 13:49:02.546791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.94:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.94:8443: connect: connection refused
	E0916 13:49:02.546911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.94:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.94:8443: connect: connection refused" logger="UnhandledError"
	I0916 13:49:18.222793       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 13:52:39 ha-190751 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 13:52:39 ha-190751 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 13:52:39 ha-190751 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 13:52:39 ha-190751 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 13:52:39 ha-190751 kubelet[1315]: E0916 13:52:39.881248    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494759880755102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:52:39 ha-190751 kubelet[1315]: E0916 13:52:39.881270    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494759880755102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:52:49 ha-190751 kubelet[1315]: E0916 13:52:49.883359    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494769883038691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:52:49 ha-190751 kubelet[1315]: E0916 13:52:49.883391    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494769883038691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:52:59 ha-190751 kubelet[1315]: E0916 13:52:59.885739    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494779885233555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:52:59 ha-190751 kubelet[1315]: E0916 13:52:59.886317    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494779885233555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:09 ha-190751 kubelet[1315]: E0916 13:53:09.890250    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494789889115699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:09 ha-190751 kubelet[1315]: E0916 13:53:09.890389    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494789889115699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:19 ha-190751 kubelet[1315]: E0916 13:53:19.892399    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494799891957982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:19 ha-190751 kubelet[1315]: E0916 13:53:19.892719    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494799891957982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:29 ha-190751 kubelet[1315]: E0916 13:53:29.896354    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494809895091035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:29 ha-190751 kubelet[1315]: E0916 13:53:29.896797    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494809895091035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:39 ha-190751 kubelet[1315]: E0916 13:53:39.648346    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 13:53:39 ha-190751 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 13:53:39 ha-190751 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 13:53:39 ha-190751 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 13:53:39 ha-190751 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 13:53:39 ha-190751 kubelet[1315]: E0916 13:53:39.898557    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494819898194345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:39 ha-190751 kubelet[1315]: E0916 13:53:39.898583    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494819898194345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:49 ha-190751 kubelet[1315]: E0916 13:53:49.905966    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494829901134144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 13:53:49 ha-190751 kubelet[1315]: E0916 13:53:49.906055    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726494829901134144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 13:53:56.708303  743657 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19652-713072/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-190751 -n ha-190751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-190751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.65s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (322.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-561755
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-561755
E0916 14:10:50.206939  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-561755: exit status 82 (2m1.921747s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-561755-m03"  ...
	* Stopping node "multinode-561755-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-561755" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-561755 --wait=true -v=8 --alsologtostderr
E0916 14:13:53.277362  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 14:15:50.207046  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-561755 --wait=true -v=8 --alsologtostderr: (3m18.074568017s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-561755
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-561755 -n multinode-561755
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-561755 logs -n 25: (1.396279877s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1710468598/001/cp-test_multinode-561755-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755:/home/docker/cp-test_multinode-561755-m02_multinode-561755.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755 sudo cat                                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | /home/docker/cp-test_multinode-561755-m02_multinode-561755.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m03:/home/docker/cp-test_multinode-561755-m02_multinode-561755-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755-m03 sudo cat                                   | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m02_multinode-561755-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp testdata/cp-test.txt                                                | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1710468598/001/cp-test_multinode-561755-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755:/home/docker/cp-test_multinode-561755-m03_multinode-561755.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755 sudo cat                                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m03_multinode-561755.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m02:/home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755-m02 sudo cat                                   | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-561755 node stop m03                                                          | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	| node    | multinode-561755 node start                                                             | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-561755                                                                | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC |                     |
	| stop    | -p multinode-561755                                                                     | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC |                     |
	| start   | -p multinode-561755                                                                     | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:12 UTC | 16 Sep 24 14:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-561755                                                                | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 14:12:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 14:12:44.162099  753338 out.go:345] Setting OutFile to fd 1 ...
	I0916 14:12:44.162247  753338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:12:44.162258  753338 out.go:358] Setting ErrFile to fd 2...
	I0916 14:12:44.162264  753338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:12:44.162438  753338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 14:12:44.163012  753338 out.go:352] Setting JSON to false
	I0916 14:12:44.164014  753338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14113,"bootTime":1726481851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 14:12:44.164119  753338 start.go:139] virtualization: kvm guest
	I0916 14:12:44.166711  753338 out.go:177] * [multinode-561755] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 14:12:44.168115  753338 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 14:12:44.168104  753338 notify.go:220] Checking for updates...
	I0916 14:12:44.170624  753338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 14:12:44.171919  753338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 14:12:44.173303  753338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 14:12:44.174801  753338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 14:12:44.176199  753338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 14:12:44.177727  753338 config.go:182] Loaded profile config "multinode-561755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 14:12:44.177841  753338 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 14:12:44.178493  753338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:12:44.178541  753338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:12:44.194766  753338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0916 14:12:44.195320  753338 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:12:44.195969  753338 main.go:141] libmachine: Using API Version  1
	I0916 14:12:44.195990  753338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:12:44.196349  753338 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:12:44.196550  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:12:44.233058  753338 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 14:12:44.234212  753338 start.go:297] selected driver: kvm2
	I0916 14:12:44.234229  753338 start.go:901] validating driver "kvm2" against &{Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:12:44.234350  753338 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 14:12:44.234707  753338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:12:44.234783  753338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 14:12:44.249588  753338 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 14:12:44.250252  753338 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 14:12:44.250287  753338 cni.go:84] Creating CNI manager for ""
	I0916 14:12:44.250351  753338 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 14:12:44.250412  753338 start.go:340] cluster config:
	{Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:12:44.250546  753338 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:12:44.252843  753338 out.go:177] * Starting "multinode-561755" primary control-plane node in "multinode-561755" cluster
	I0916 14:12:44.254031  753338 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 14:12:44.254072  753338 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 14:12:44.254081  753338 cache.go:56] Caching tarball of preloaded images
	I0916 14:12:44.254152  753338 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 14:12:44.254162  753338 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 14:12:44.254271  753338 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/config.json ...
	I0916 14:12:44.254489  753338 start.go:360] acquireMachinesLock for multinode-561755: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 14:12:44.254527  753338 start.go:364] duration metric: took 21.927µs to acquireMachinesLock for "multinode-561755"
	I0916 14:12:44.254541  753338 start.go:96] Skipping create...Using existing machine configuration
	I0916 14:12:44.254546  753338 fix.go:54] fixHost starting: 
	I0916 14:12:44.254790  753338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:12:44.254828  753338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:12:44.268740  753338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0916 14:12:44.269193  753338 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:12:44.269715  753338 main.go:141] libmachine: Using API Version  1
	I0916 14:12:44.269743  753338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:12:44.270044  753338 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:12:44.270217  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:12:44.270333  753338 main.go:141] libmachine: (multinode-561755) Calling .GetState
	I0916 14:12:44.271638  753338 fix.go:112] recreateIfNeeded on multinode-561755: state=Running err=<nil>
	W0916 14:12:44.271669  753338 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 14:12:44.273466  753338 out.go:177] * Updating the running kvm2 "multinode-561755" VM ...
	I0916 14:12:44.274606  753338 machine.go:93] provisionDockerMachine start ...
	I0916 14:12:44.274632  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:12:44.274806  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.277182  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.277649  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.277720  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.277784  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.277945  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.278099  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.278208  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.278349  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.278558  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.278573  753338 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 14:12:44.394681  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-561755
	
	I0916 14:12:44.394707  753338 main.go:141] libmachine: (multinode-561755) Calling .GetMachineName
	I0916 14:12:44.394992  753338 buildroot.go:166] provisioning hostname "multinode-561755"
	I0916 14:12:44.395026  753338 main.go:141] libmachine: (multinode-561755) Calling .GetMachineName
	I0916 14:12:44.395219  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.397689  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.398075  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.398102  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.398256  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.398426  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.398578  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.398698  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.398844  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.399023  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.399040  753338 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-561755 && echo "multinode-561755" | sudo tee /etc/hostname
	I0916 14:12:44.529742  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-561755
	
	I0916 14:12:44.529772  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.532199  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.532633  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.532666  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.532794  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.532987  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.533129  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.533279  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.533422  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.533593  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.533608  753338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-561755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-561755/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-561755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 14:12:44.646883  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
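	The empty output above is consistent with how the guarded script behaves: either /etc/hosts already contained a line ending in the hostname (first grep matched, nothing ran) or the existing 127.0.1.1 entry was rewritten in place by sed, which prints nothing. An illustrative way to confirm the mapping on the guest, not part of the captured test output, would be:
	
	    $ hostname                              # should print multinode-561755
	    $ grep 'multinode-561755' /etc/hosts    # e.g. "127.0.1.1 multinode-561755" if the script had to add or rewrite the entry
	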
	I0916 14:12:44.646940  753338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 14:12:44.646983  753338 buildroot.go:174] setting up certificates
	I0916 14:12:44.647001  753338 provision.go:84] configureAuth start
	I0916 14:12:44.647018  753338 main.go:141] libmachine: (multinode-561755) Calling .GetMachineName
	I0916 14:12:44.647320  753338 main.go:141] libmachine: (multinode-561755) Calling .GetIP
	I0916 14:12:44.650086  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.650383  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.650403  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.650587  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.652528  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.652803  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.652833  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.652942  753338 provision.go:143] copyHostCerts
	I0916 14:12:44.652985  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 14:12:44.653037  753338 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 14:12:44.653050  753338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 14:12:44.653128  753338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 14:12:44.653245  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 14:12:44.653271  753338 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 14:12:44.653279  753338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 14:12:44.653317  753338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 14:12:44.653463  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 14:12:44.653490  753338 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 14:12:44.653500  753338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 14:12:44.653538  753338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 14:12:44.653656  753338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.multinode-561755 san=[127.0.0.1 192.168.39.163 localhost minikube multinode-561755]
	I0916 14:12:44.768791  753338 provision.go:177] copyRemoteCerts
	I0916 14:12:44.768870  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 14:12:44.768898  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.771831  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.772265  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.772307  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.772484  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.772694  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.772851  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.772972  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:12:44.859977  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 14:12:44.860064  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 14:12:44.885259  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 14:12:44.885358  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 14:12:44.909952  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 14:12:44.910021  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
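	The three scp steps above place the CA certificate and the freshly generated server key pair under /etc/docker on the guest (the remote paths come from the auth options logged earlier). A hand check, illustrative only and not part of the captured output, could look like:
	
	    $ sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	    $ sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates   # org should be jenkins.multinode-561755 per the provision log above
	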
	I0916 14:12:44.933833  753338 provision.go:87] duration metric: took 286.813153ms to configureAuth
	I0916 14:12:44.933869  753338 buildroot.go:189] setting minikube options for container-runtime
	I0916 14:12:44.934307  753338 config.go:182] Loaded profile config "multinode-561755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 14:12:44.934408  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.937271  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.937663  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.937704  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.937958  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.938171  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.938335  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.938473  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.938624  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.938834  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.938855  753338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 14:14:15.585702  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 14:14:15.585752  753338 machine.go:96] duration metric: took 1m31.311122005s to provisionDockerMachine
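	Most of that 1m31s sits inside the single SSH command issued at 14:12:44, which writes the CRI-O options drop-in and then runs `sudo systemctl restart crio`. Reconstructed from that command (the file is never read back in this log), the drop-in on the guest should contain:
	
	    $ cat /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	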
	I0916 14:14:15.585768  753338 start.go:293] postStartSetup for "multinode-561755" (driver="kvm2")
	I0916 14:14:15.585822  753338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 14:14:15.585849  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.586254  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 14:14:15.586285  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.589701  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.590099  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.590120  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.590310  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.590504  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.590684  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.590844  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:14:15.684393  753338 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 14:14:15.689309  753338 command_runner.go:130] > NAME=Buildroot
	I0916 14:14:15.689331  753338 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 14:14:15.689343  753338 command_runner.go:130] > ID=buildroot
	I0916 14:14:15.689350  753338 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 14:14:15.689357  753338 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 14:14:15.689394  753338 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 14:14:15.689407  753338 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 14:14:15.689461  753338 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 14:14:15.689544  753338 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 14:14:15.689556  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 14:14:15.689690  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 14:14:15.699431  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:14:15.722108  753338 start.go:296] duration metric: took 136.328874ms for postStartSetup
	I0916 14:14:15.722140  753338 fix.go:56] duration metric: took 1m31.46759514s for fixHost
	I0916 14:14:15.722163  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.724890  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.725262  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.725285  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.725438  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.725637  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.725810  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.725944  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.726081  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:14:15.726244  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:14:15.726254  753338 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 14:14:15.837817  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726496055.811109899
	
	I0916 14:14:15.837839  753338 fix.go:216] guest clock: 1726496055.811109899
	I0916 14:14:15.837846  753338 fix.go:229] Guest: 2024-09-16 14:14:15.811109899 +0000 UTC Remote: 2024-09-16 14:14:15.72214485 +0000 UTC m=+91.595923156 (delta=88.965049ms)
	I0916 14:14:15.837882  753338 fix.go:200] guest clock delta is within tolerance: 88.965049ms
	I0916 14:14:15.837887  753338 start.go:83] releasing machines lock for "multinode-561755", held for 1m31.583351981s
	I0916 14:14:15.837907  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.838173  753338 main.go:141] libmachine: (multinode-561755) Calling .GetIP
	I0916 14:14:15.840747  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.841103  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.841124  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.841297  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.841815  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.841988  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.842107  753338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 14:14:15.842153  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.842237  753338 ssh_runner.go:195] Run: cat /version.json
	I0916 14:14:15.842263  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.844633  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.844951  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.844982  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.845005  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.845128  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.845295  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.845447  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.845455  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.845474  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.845649  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.845646  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:14:15.845824  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.845960  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.846086  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:14:15.945840  753338 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 14:14:15.945891  753338 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 14:14:15.946031  753338 ssh_runner.go:195] Run: systemctl --version
	I0916 14:14:15.951565  753338 command_runner.go:130] > systemd 252 (252)
	I0916 14:14:15.951615  753338 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 14:14:15.951686  753338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 14:14:16.106670  753338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 14:14:16.112488  753338 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 14:14:16.112535  753338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 14:14:16.112596  753338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 14:14:16.121447  753338 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 14:14:16.121466  753338 start.go:495] detecting cgroup driver to use...
	I0916 14:14:16.121517  753338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 14:14:16.136907  753338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 14:14:16.149990  753338 docker.go:217] disabling cri-docker service (if available) ...
	I0916 14:14:16.150023  753338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 14:14:16.162604  753338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 14:14:16.175139  753338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 14:14:16.309311  753338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 14:14:16.440113  753338 docker.go:233] disabling docker service ...
	I0916 14:14:16.440186  753338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 14:14:16.455226  753338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 14:14:16.468719  753338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 14:14:16.599755  753338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 14:14:16.739840  753338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
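	After the stop/disable/mask sequence above, both Docker units should report as masked. An illustrative confirmation on the guest (not part of the captured output; note that `systemctl is-enabled` typically exits non-zero for masked units):
	
	    $ systemctl is-enabled docker.service cri-docker.service
	    masked
	    masked
	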
	I0916 14:14:16.754847  753338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 14:14:16.773075  753338 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 14:14:16.773127  753338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 14:14:16.773184  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.783578  753338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 14:14:16.783651  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.793513  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.803156  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.813484  753338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 14:14:16.823599  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.833179  753338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.843955  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.853607  753338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 14:14:16.862390  753338 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 14:14:16.862446  753338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 14:14:16.871187  753338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:14:17.002438  753338 ssh_runner.go:195] Run: sudo systemctl restart crio
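	For reference, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following effective settings before this restart (reconstructed from the commands, not read back from the node):
	
	    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    "net.ipv4.ip_unprivileged_port_start=0",
	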
	I0916 14:14:17.185411  753338 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 14:14:17.185478  753338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 14:14:17.190166  753338 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 14:14:17.190186  753338 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 14:14:17.190192  753338 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0916 14:14:17.190199  753338 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 14:14:17.190203  753338 command_runner.go:130] > Access: 2024-09-16 14:14:17.066932209 +0000
	I0916 14:14:17.190209  753338 command_runner.go:130] > Modify: 2024-09-16 14:14:17.066932209 +0000
	I0916 14:14:17.190214  753338 command_runner.go:130] > Change: 2024-09-16 14:14:17.066932209 +0000
	I0916 14:14:17.190218  753338 command_runner.go:130] >  Birth: -
	I0916 14:14:17.190257  753338 start.go:563] Will wait 60s for crictl version
	I0916 14:14:17.190327  753338 ssh_runner.go:195] Run: which crictl
	I0916 14:14:17.194057  753338 command_runner.go:130] > /usr/bin/crictl
	I0916 14:14:17.194120  753338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 14:14:17.233722  753338 command_runner.go:130] > Version:  0.1.0
	I0916 14:14:17.233743  753338 command_runner.go:130] > RuntimeName:  cri-o
	I0916 14:14:17.233748  753338 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 14:14:17.233753  753338 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 14:14:17.233957  753338 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
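	The same runtime check can be reproduced on the guest with crictl directly; the endpoint comes from the /etc/crictl.yaml written a few lines earlier, so no extra flags should be needed (illustrative, mirrors the output captured above):
	
	    $ sudo crictl version
	    Version:  0.1.0
	    RuntimeName:  cri-o
	    RuntimeVersion:  1.29.1
	    RuntimeApiVersion:  v1
	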
	I0916 14:14:17.234039  753338 ssh_runner.go:195] Run: crio --version
	I0916 14:14:17.264508  753338 command_runner.go:130] > crio version 1.29.1
	I0916 14:14:17.264525  753338 command_runner.go:130] > Version:        1.29.1
	I0916 14:14:17.264532  753338 command_runner.go:130] > GitCommit:      unknown
	I0916 14:14:17.264539  753338 command_runner.go:130] > GitCommitDate:  unknown
	I0916 14:14:17.264545  753338 command_runner.go:130] > GitTreeState:   clean
	I0916 14:14:17.264558  753338 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 14:14:17.264565  753338 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 14:14:17.264572  753338 command_runner.go:130] > Compiler:       gc
	I0916 14:14:17.264578  753338 command_runner.go:130] > Platform:       linux/amd64
	I0916 14:14:17.264584  753338 command_runner.go:130] > Linkmode:       dynamic
	I0916 14:14:17.264592  753338 command_runner.go:130] > BuildTags:      
	I0916 14:14:17.264597  753338 command_runner.go:130] >   containers_image_ostree_stub
	I0916 14:14:17.264604  753338 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 14:14:17.264613  753338 command_runner.go:130] >   btrfs_noversion
	I0916 14:14:17.264620  753338 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 14:14:17.264626  753338 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 14:14:17.264636  753338 command_runner.go:130] >   seccomp
	I0916 14:14:17.264648  753338 command_runner.go:130] > LDFlags:          unknown
	I0916 14:14:17.264654  753338 command_runner.go:130] > SeccompEnabled:   true
	I0916 14:14:17.264659  753338 command_runner.go:130] > AppArmorEnabled:  false
	I0916 14:14:17.264770  753338 ssh_runner.go:195] Run: crio --version
	I0916 14:14:17.291130  753338 command_runner.go:130] > crio version 1.29.1
	I0916 14:14:17.291153  753338 command_runner.go:130] > Version:        1.29.1
	I0916 14:14:17.291162  753338 command_runner.go:130] > GitCommit:      unknown
	I0916 14:14:17.291169  753338 command_runner.go:130] > GitCommitDate:  unknown
	I0916 14:14:17.291175  753338 command_runner.go:130] > GitTreeState:   clean
	I0916 14:14:17.291189  753338 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 14:14:17.291197  753338 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 14:14:17.291206  753338 command_runner.go:130] > Compiler:       gc
	I0916 14:14:17.291213  753338 command_runner.go:130] > Platform:       linux/amd64
	I0916 14:14:17.291223  753338 command_runner.go:130] > Linkmode:       dynamic
	I0916 14:14:17.291233  753338 command_runner.go:130] > BuildTags:      
	I0916 14:14:17.291241  753338 command_runner.go:130] >   containers_image_ostree_stub
	I0916 14:14:17.291251  753338 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 14:14:17.291260  753338 command_runner.go:130] >   btrfs_noversion
	I0916 14:14:17.291269  753338 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 14:14:17.291278  753338 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 14:14:17.291283  753338 command_runner.go:130] >   seccomp
	I0916 14:14:17.291292  753338 command_runner.go:130] > LDFlags:          unknown
	I0916 14:14:17.291301  753338 command_runner.go:130] > SeccompEnabled:   true
	I0916 14:14:17.291311  753338 command_runner.go:130] > AppArmorEnabled:  false
	I0916 14:14:17.294866  753338 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 14:14:17.296070  753338 main.go:141] libmachine: (multinode-561755) Calling .GetIP
	I0916 14:14:17.298681  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:17.298993  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:17.299024  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:17.299203  753338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 14:14:17.303474  753338 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 14:14:17.303585  753338 kubeadm.go:883] updating cluster {Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 14:14:17.303763  753338 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 14:14:17.303816  753338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:14:17.345252  753338 command_runner.go:130] > {
	I0916 14:14:17.345270  753338 command_runner.go:130] >   "images": [
	I0916 14:14:17.345274  753338 command_runner.go:130] >     {
	I0916 14:14:17.345281  753338 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 14:14:17.345287  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345296  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 14:14:17.345302  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345308  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345323  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 14:14:17.345338  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 14:14:17.345343  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345348  753338 command_runner.go:130] >       "size": "87190579",
	I0916 14:14:17.345355  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345358  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345364  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345370  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345374  753338 command_runner.go:130] >     },
	I0916 14:14:17.345378  753338 command_runner.go:130] >     {
	I0916 14:14:17.345384  753338 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 14:14:17.345391  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345396  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 14:14:17.345401  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345405  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345412  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 14:14:17.345421  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 14:14:17.345424  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345428  753338 command_runner.go:130] >       "size": "1363676",
	I0916 14:14:17.345432  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345441  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345449  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345453  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345458  753338 command_runner.go:130] >     },
	I0916 14:14:17.345461  753338 command_runner.go:130] >     {
	I0916 14:14:17.345469  753338 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 14:14:17.345473  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345478  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 14:14:17.345484  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345488  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345497  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 14:14:17.345507  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 14:14:17.345511  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345515  753338 command_runner.go:130] >       "size": "31470524",
	I0916 14:14:17.345521  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345525  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345531  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345535  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345538  753338 command_runner.go:130] >     },
	I0916 14:14:17.345541  753338 command_runner.go:130] >     {
	I0916 14:14:17.345547  753338 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 14:14:17.345560  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345567  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 14:14:17.345570  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345574  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345583  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 14:14:17.345595  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 14:14:17.345601  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345604  753338 command_runner.go:130] >       "size": "63273227",
	I0916 14:14:17.345609  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345612  753338 command_runner.go:130] >       "username": "nonroot",
	I0916 14:14:17.345621  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345627  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345632  753338 command_runner.go:130] >     },
	I0916 14:14:17.345638  753338 command_runner.go:130] >     {
	I0916 14:14:17.345648  753338 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 14:14:17.345657  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345665  753338 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 14:14:17.345690  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345697  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345710  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 14:14:17.345724  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 14:14:17.345732  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345738  753338 command_runner.go:130] >       "size": "149009664",
	I0916 14:14:17.345746  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.345752  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.345760  753338 command_runner.go:130] >       },
	I0916 14:14:17.345766  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345775  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345781  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345789  753338 command_runner.go:130] >     },
	I0916 14:14:17.345794  753338 command_runner.go:130] >     {
	I0916 14:14:17.345803  753338 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 14:14:17.345811  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345819  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 14:14:17.345827  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345833  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345847  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 14:14:17.345859  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 14:14:17.345865  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345869  753338 command_runner.go:130] >       "size": "95237600",
	I0916 14:14:17.345875  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.345878  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.345882  753338 command_runner.go:130] >       },
	I0916 14:14:17.345886  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345892  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345896  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345904  753338 command_runner.go:130] >     },
	I0916 14:14:17.345908  753338 command_runner.go:130] >     {
	I0916 14:14:17.345915  753338 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 14:14:17.345921  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345927  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 14:14:17.345932  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345936  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345944  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 14:14:17.345953  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 14:14:17.345957  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345961  753338 command_runner.go:130] >       "size": "89437508",
	I0916 14:14:17.345967  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.345971  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.345974  753338 command_runner.go:130] >       },
	I0916 14:14:17.345979  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345985  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345989  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345992  753338 command_runner.go:130] >     },
	I0916 14:14:17.345995  753338 command_runner.go:130] >     {
	I0916 14:14:17.346001  753338 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 14:14:17.346007  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.346012  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 14:14:17.346017  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346021  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.346038  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 14:14:17.346047  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 14:14:17.346052  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346056  753338 command_runner.go:130] >       "size": "92733849",
	I0916 14:14:17.346062  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.346066  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.346070  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.346076  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.346079  753338 command_runner.go:130] >     },
	I0916 14:14:17.346083  753338 command_runner.go:130] >     {
	I0916 14:14:17.346089  753338 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 14:14:17.346092  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.346097  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 14:14:17.346100  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346104  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.346111  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 14:14:17.346117  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 14:14:17.346121  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346125  753338 command_runner.go:130] >       "size": "68420934",
	I0916 14:14:17.346128  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.346132  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.346135  753338 command_runner.go:130] >       },
	I0916 14:14:17.346138  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.346142  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.346145  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.346148  753338 command_runner.go:130] >     },
	I0916 14:14:17.346151  753338 command_runner.go:130] >     {
	I0916 14:14:17.346156  753338 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 14:14:17.346159  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.346163  753338 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 14:14:17.346166  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346170  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.346177  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 14:14:17.346183  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 14:14:17.346186  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346189  753338 command_runner.go:130] >       "size": "742080",
	I0916 14:14:17.346193  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.346196  753338 command_runner.go:130] >         "value": "65535"
	I0916 14:14:17.346199  753338 command_runner.go:130] >       },
	I0916 14:14:17.346203  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.346207  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.346210  753338 command_runner.go:130] >       "pinned": true
	I0916 14:14:17.346214  753338 command_runner.go:130] >     }
	I0916 14:14:17.346217  753338 command_runner.go:130] >   ]
	I0916 14:14:17.346220  753338 command_runner.go:130] > }
	I0916 14:14:17.346770  753338 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:14:17.346788  753338 crio.go:433] Images already preloaded, skipping extraction
	I0916 14:14:17.346843  753338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:14:17.377588  753338 command_runner.go:130] > {
	I0916 14:14:17.377611  753338 command_runner.go:130] >   "images": [
	I0916 14:14:17.377615  753338 command_runner.go:130] >     {
	I0916 14:14:17.377623  753338 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 14:14:17.377628  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377634  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 14:14:17.377638  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377642  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377650  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 14:14:17.377657  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 14:14:17.377661  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377677  753338 command_runner.go:130] >       "size": "87190579",
	I0916 14:14:17.377681  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377707  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.377725  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377730  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377734  753338 command_runner.go:130] >     },
	I0916 14:14:17.377737  753338 command_runner.go:130] >     {
	I0916 14:14:17.377743  753338 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 14:14:17.377749  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377756  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 14:14:17.377762  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377766  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377775  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 14:14:17.377782  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 14:14:17.377787  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377792  753338 command_runner.go:130] >       "size": "1363676",
	I0916 14:14:17.377796  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377805  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.377809  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377813  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377818  753338 command_runner.go:130] >     },
	I0916 14:14:17.377822  753338 command_runner.go:130] >     {
	I0916 14:14:17.377828  753338 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 14:14:17.377833  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377838  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 14:14:17.377841  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377848  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377855  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 14:14:17.377863  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 14:14:17.377866  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377870  753338 command_runner.go:130] >       "size": "31470524",
	I0916 14:14:17.377874  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377878  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.377882  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377886  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377890  753338 command_runner.go:130] >     },
	I0916 14:14:17.377893  753338 command_runner.go:130] >     {
	I0916 14:14:17.377899  753338 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 14:14:17.377904  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377909  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 14:14:17.377912  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377916  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377923  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 14:14:17.377934  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 14:14:17.377938  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377943  753338 command_runner.go:130] >       "size": "63273227",
	I0916 14:14:17.377947  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377955  753338 command_runner.go:130] >       "username": "nonroot",
	I0916 14:14:17.377960  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377964  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377969  753338 command_runner.go:130] >     },
	I0916 14:14:17.377972  753338 command_runner.go:130] >     {
	I0916 14:14:17.377978  753338 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 14:14:17.377981  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377986  753338 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 14:14:17.377989  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377993  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377999  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 14:14:17.378007  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 14:14:17.378010  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378014  753338 command_runner.go:130] >       "size": "149009664",
	I0916 14:14:17.378019  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378022  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378025  753338 command_runner.go:130] >       },
	I0916 14:14:17.378029  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378034  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378037  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378041  753338 command_runner.go:130] >     },
	I0916 14:14:17.378044  753338 command_runner.go:130] >     {
	I0916 14:14:17.378050  753338 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 14:14:17.378055  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378060  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 14:14:17.378063  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378068  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378075  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 14:14:17.378082  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 14:14:17.378086  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378090  753338 command_runner.go:130] >       "size": "95237600",
	I0916 14:14:17.378094  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378098  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378104  753338 command_runner.go:130] >       },
	I0916 14:14:17.378108  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378112  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378116  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378119  753338 command_runner.go:130] >     },
	I0916 14:14:17.378122  753338 command_runner.go:130] >     {
	I0916 14:14:17.378130  753338 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 14:14:17.378134  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378139  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 14:14:17.378142  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378146  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378154  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 14:14:17.378163  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 14:14:17.378169  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378175  753338 command_runner.go:130] >       "size": "89437508",
	I0916 14:14:17.378179  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378185  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378188  753338 command_runner.go:130] >       },
	I0916 14:14:17.378192  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378199  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378202  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378206  753338 command_runner.go:130] >     },
	I0916 14:14:17.378211  753338 command_runner.go:130] >     {
	I0916 14:14:17.378217  753338 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 14:14:17.378222  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378227  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 14:14:17.378230  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378234  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378248  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 14:14:17.378255  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 14:14:17.378258  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378263  753338 command_runner.go:130] >       "size": "92733849",
	I0916 14:14:17.378266  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.378270  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378274  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378278  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378281  753338 command_runner.go:130] >     },
	I0916 14:14:17.378284  753338 command_runner.go:130] >     {
	I0916 14:14:17.378289  753338 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 14:14:17.378293  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378298  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 14:14:17.378301  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378307  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378314  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 14:14:17.378321  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 14:14:17.378324  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378329  753338 command_runner.go:130] >       "size": "68420934",
	I0916 14:14:17.378332  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378336  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378340  753338 command_runner.go:130] >       },
	I0916 14:14:17.378343  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378347  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378351  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378355  753338 command_runner.go:130] >     },
	I0916 14:14:17.378358  753338 command_runner.go:130] >     {
	I0916 14:14:17.378364  753338 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 14:14:17.378369  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378374  753338 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 14:14:17.378381  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378385  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378392  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 14:14:17.378404  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 14:14:17.378407  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378411  753338 command_runner.go:130] >       "size": "742080",
	I0916 14:14:17.378416  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378420  753338 command_runner.go:130] >         "value": "65535"
	I0916 14:14:17.378423  753338 command_runner.go:130] >       },
	I0916 14:14:17.378427  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378431  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378435  753338 command_runner.go:130] >       "pinned": true
	I0916 14:14:17.378441  753338 command_runner.go:130] >     }
	I0916 14:14:17.378444  753338 command_runner.go:130] >   ]
	I0916 14:14:17.378448  753338 command_runner.go:130] > }
	I0916 14:14:17.378773  753338 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:14:17.378795  753338 cache_images.go:84] Images are preloaded, skipping loading
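	The listing above is the raw `sudo crictl images --output json` output that the preload check walks through, matching the `repoTags` entries against the images it expects for this Kubernetes version. A minimal sketch of that kind of comparison in Go, assuming a hypothetical required-image set (`required` below is illustrative, not minikube's actual list) and the same JSON shape piped in on stdin:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the shape of `crictl images --output json` shown above.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
			Pinned   bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Hypothetical required set; the real list depends on the Kubernetes version.
		required := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.31.1": false,
			"registry.k8s.io/etcd:3.5.15-0":          false,
			"registry.k8s.io/pause:3.10":             false,
		}

		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		// Mark every required tag that appears in the runtime's image store.
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if _, ok := required[tag]; ok {
					required[tag] = true
				}
			}
		}
		// Anything still false would mean the preload is incomplete.
		for tag, found := range required {
			if !found {
				fmt.Println("missing:", tag)
			}
		}
	}

	Fed the JSON captured above, this sketch would print nothing, which is the "all images are preloaded" case logged by crio.go:514.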
	I0916 14:14:17.378808  753338 kubeadm.go:934] updating node { 192.168.39.163 8443 v1.31.1 crio true true} ...
	I0916 14:14:17.378959  753338 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-561755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
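	The kubelet unit text and config struct above are what get rendered into the node's kubelet systemd drop-in. Purely as an illustrative sketch with simplified, hypothetical field names (not minikube's internal types), the same rendering can be expressed with Go's text/template:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletSettings is a simplified, hypothetical subset of the values seen in the log above.
	type kubeletSettings struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		s := kubeletSettings{
			KubernetesVersion: "v1.31.1",
			NodeName:          "multinode-561755",
			NodeIP:            "192.168.39.163",
		}
		// Render the drop-in to stdout; on a real node it would be written into
		// the kubelet service drop-in directory instead.
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}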
	I0916 14:14:17.379047  753338 ssh_runner.go:195] Run: crio config
	I0916 14:14:17.420896  753338 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 14:14:17.420930  753338 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 14:14:17.420940  753338 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 14:14:17.420946  753338 command_runner.go:130] > #
	I0916 14:14:17.420957  753338 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 14:14:17.420966  753338 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 14:14:17.420976  753338 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 14:14:17.420987  753338 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 14:14:17.420994  753338 command_runner.go:130] > # reload'.
	I0916 14:14:17.421004  753338 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 14:14:17.421019  753338 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 14:14:17.421031  753338 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 14:14:17.421042  753338 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 14:14:17.421053  753338 command_runner.go:130] > [crio]
	I0916 14:14:17.421063  753338 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 14:14:17.421073  753338 command_runner.go:130] > # containers images, in this directory.
	I0916 14:14:17.421083  753338 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 14:14:17.421123  753338 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 14:14:17.421163  753338 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 14:14:17.421186  753338 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 14:14:17.421384  753338 command_runner.go:130] > # imagestore = ""
	I0916 14:14:17.421400  753338 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 14:14:17.421409  753338 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 14:14:17.421518  753338 command_runner.go:130] > storage_driver = "overlay"
	I0916 14:14:17.421534  753338 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 14:14:17.421544  753338 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 14:14:17.421550  753338 command_runner.go:130] > storage_option = [
	I0916 14:14:17.421759  753338 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 14:14:17.421832  753338 command_runner.go:130] > ]
	I0916 14:14:17.421848  753338 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 14:14:17.421857  753338 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 14:14:17.422192  753338 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 14:14:17.422206  753338 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 14:14:17.422216  753338 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 14:14:17.422224  753338 command_runner.go:130] > # always happen on a node reboot
	I0916 14:14:17.422512  753338 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 14:14:17.422533  753338 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 14:14:17.422545  753338 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 14:14:17.422554  753338 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 14:14:17.422737  753338 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 14:14:17.422756  753338 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 14:14:17.422769  753338 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 14:14:17.422984  753338 command_runner.go:130] > # internal_wipe = true
	I0916 14:14:17.423007  753338 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 14:14:17.423018  753338 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 14:14:17.423245  753338 command_runner.go:130] > # internal_repair = false
	I0916 14:14:17.423256  753338 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 14:14:17.423262  753338 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 14:14:17.423267  753338 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 14:14:17.423487  753338 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 14:14:17.423502  753338 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 14:14:17.423508  753338 command_runner.go:130] > [crio.api]
	I0916 14:14:17.423516  753338 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 14:14:17.423951  753338 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 14:14:17.423980  753338 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 14:14:17.424253  753338 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 14:14:17.424273  753338 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 14:14:17.424281  753338 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 14:14:17.424550  753338 command_runner.go:130] > # stream_port = "0"
	I0916 14:14:17.424567  753338 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 14:14:17.424902  753338 command_runner.go:130] > # stream_enable_tls = false
	I0916 14:14:17.424919  753338 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 14:14:17.425251  753338 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 14:14:17.425267  753338 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 14:14:17.425273  753338 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 14:14:17.425277  753338 command_runner.go:130] > # minutes.
	I0916 14:14:17.425515  753338 command_runner.go:130] > # stream_tls_cert = ""
	I0916 14:14:17.425530  753338 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 14:14:17.425539  753338 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 14:14:17.425839  753338 command_runner.go:130] > # stream_tls_key = ""
	I0916 14:14:17.425855  753338 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 14:14:17.425865  753338 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 14:14:17.425885  753338 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 14:14:17.426078  753338 command_runner.go:130] > # stream_tls_ca = ""
	I0916 14:14:17.426098  753338 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 14:14:17.426250  753338 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 14:14:17.426270  753338 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 14:14:17.426446  753338 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 14:14:17.426462  753338 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 14:14:17.426470  753338 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 14:14:17.426477  753338 command_runner.go:130] > [crio.runtime]
	I0916 14:14:17.426487  753338 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 14:14:17.426498  753338 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 14:14:17.426507  753338 command_runner.go:130] > # "nofile=1024:2048"
	I0916 14:14:17.426517  753338 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 14:14:17.426596  753338 command_runner.go:130] > # default_ulimits = [
	I0916 14:14:17.426814  753338 command_runner.go:130] > # ]
	I0916 14:14:17.426830  753338 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 14:14:17.427109  753338 command_runner.go:130] > # no_pivot = false
	I0916 14:14:17.427123  753338 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 14:14:17.427133  753338 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 14:14:17.427430  753338 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 14:14:17.427453  753338 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 14:14:17.427465  753338 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 14:14:17.427475  753338 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 14:14:17.427585  753338 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 14:14:17.427596  753338 command_runner.go:130] > # Cgroup setting for conmon
	I0916 14:14:17.427606  753338 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 14:14:17.427838  753338 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 14:14:17.427854  753338 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 14:14:17.427865  753338 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 14:14:17.427878  753338 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 14:14:17.427884  753338 command_runner.go:130] > conmon_env = [
	I0916 14:14:17.427951  753338 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 14:14:17.428007  753338 command_runner.go:130] > ]
	I0916 14:14:17.428020  753338 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 14:14:17.428029  753338 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 14:14:17.428041  753338 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 14:14:17.428881  753338 command_runner.go:130] > # default_env = [
	I0916 14:14:17.428895  753338 command_runner.go:130] > # ]
	I0916 14:14:17.428905  753338 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 14:14:17.428916  753338 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0916 14:14:17.428922  753338 command_runner.go:130] > # selinux = false
	I0916 14:14:17.428931  753338 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 14:14:17.428939  753338 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 14:14:17.428947  753338 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 14:14:17.428953  753338 command_runner.go:130] > # seccomp_profile = ""
	I0916 14:14:17.428961  753338 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 14:14:17.428974  753338 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 14:14:17.428984  753338 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 14:14:17.428994  753338 command_runner.go:130] > # which might increase security.
	I0916 14:14:17.429001  753338 command_runner.go:130] > # This option is currently deprecated,
	I0916 14:14:17.429011  753338 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 14:14:17.429020  753338 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 14:14:17.429032  753338 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 14:14:17.429045  753338 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 14:14:17.429059  753338 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 14:14:17.429072  753338 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 14:14:17.429079  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.429096  753338 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 14:14:17.429110  753338 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 14:14:17.429123  753338 command_runner.go:130] > # the cgroup blockio controller.
	I0916 14:14:17.429133  753338 command_runner.go:130] > # blockio_config_file = ""
	I0916 14:14:17.429149  753338 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 14:14:17.429158  753338 command_runner.go:130] > # blockio parameters.
	I0916 14:14:17.429164  753338 command_runner.go:130] > # blockio_reload = false
	I0916 14:14:17.429175  753338 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 14:14:17.429181  753338 command_runner.go:130] > # irqbalance daemon.
	I0916 14:14:17.429190  753338 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 14:14:17.429201  753338 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 14:14:17.429215  753338 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 14:14:17.429227  753338 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 14:14:17.429239  753338 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 14:14:17.429252  753338 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 14:14:17.429263  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.429271  753338 command_runner.go:130] > # rdt_config_file = ""
	I0916 14:14:17.429280  753338 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 14:14:17.429290  753338 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 14:14:17.429312  753338 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 14:14:17.429325  753338 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 14:14:17.429336  753338 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 14:14:17.429349  753338 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 14:14:17.429358  753338 command_runner.go:130] > # will be added.
	I0916 14:14:17.429367  753338 command_runner.go:130] > # default_capabilities = [
	I0916 14:14:17.429375  753338 command_runner.go:130] > # 	"CHOWN",
	I0916 14:14:17.429381  753338 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 14:14:17.429390  753338 command_runner.go:130] > # 	"FSETID",
	I0916 14:14:17.429396  753338 command_runner.go:130] > # 	"FOWNER",
	I0916 14:14:17.429405  753338 command_runner.go:130] > # 	"SETGID",
	I0916 14:14:17.429412  753338 command_runner.go:130] > # 	"SETUID",
	I0916 14:14:17.429420  753338 command_runner.go:130] > # 	"SETPCAP",
	I0916 14:14:17.429427  753338 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 14:14:17.429435  753338 command_runner.go:130] > # 	"KILL",
	I0916 14:14:17.429443  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429457  753338 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 14:14:17.429474  753338 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 14:14:17.429486  753338 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 14:14:17.429501  753338 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 14:14:17.429513  753338 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 14:14:17.429523  753338 command_runner.go:130] > default_sysctls = [
	I0916 14:14:17.429535  753338 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 14:14:17.429543  753338 command_runner.go:130] > ]
	I0916 14:14:17.429551  753338 command_runner.go:130] > # List of devices on the host that a
	I0916 14:14:17.429564  753338 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 14:14:17.429573  753338 command_runner.go:130] > # allowed_devices = [
	I0916 14:14:17.429578  753338 command_runner.go:130] > # 	"/dev/fuse",
	I0916 14:14:17.429583  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429592  753338 command_runner.go:130] > # List of additional devices, specified as
	I0916 14:14:17.429606  753338 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 14:14:17.429622  753338 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 14:14:17.429634  753338 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 14:14:17.429643  753338 command_runner.go:130] > # additional_devices = [
	I0916 14:14:17.429648  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429659  753338 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 14:14:17.429681  753338 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 14:14:17.429688  753338 command_runner.go:130] > # 	"/etc/cdi",
	I0916 14:14:17.429694  753338 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 14:14:17.429699  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429713  753338 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 14:14:17.429725  753338 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 14:14:17.429734  753338 command_runner.go:130] > # Defaults to false.
	I0916 14:14:17.429741  753338 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 14:14:17.429754  753338 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 14:14:17.429769  753338 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 14:14:17.429777  753338 command_runner.go:130] > # hooks_dir = [
	I0916 14:14:17.429784  753338 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 14:14:17.429790  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429802  753338 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 14:14:17.429817  753338 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 14:14:17.429830  753338 command_runner.go:130] > # its default mounts from the following two files:
	I0916 14:14:17.429837  753338 command_runner.go:130] > #
	I0916 14:14:17.429846  753338 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 14:14:17.429856  753338 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 14:14:17.429865  753338 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 14:14:17.429871  753338 command_runner.go:130] > #
	I0916 14:14:17.429878  753338 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 14:14:17.429886  753338 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 14:14:17.429898  753338 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 14:14:17.429908  753338 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 14:14:17.429913  753338 command_runner.go:130] > #
	I0916 14:14:17.429920  753338 command_runner.go:130] > # default_mounts_file = ""
	I0916 14:14:17.429938  753338 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 14:14:17.429951  753338 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 14:14:17.429958  753338 command_runner.go:130] > pids_limit = 1024
	I0916 14:14:17.429970  753338 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0916 14:14:17.429982  753338 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 14:14:17.429993  753338 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 14:14:17.430009  753338 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 14:14:17.430018  753338 command_runner.go:130] > # log_size_max = -1
	I0916 14:14:17.430029  753338 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 14:14:17.430038  753338 command_runner.go:130] > # log_to_journald = false
	I0916 14:14:17.430050  753338 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 14:14:17.430060  753338 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 14:14:17.430071  753338 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 14:14:17.430078  753338 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 14:14:17.430089  753338 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 14:14:17.430099  753338 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 14:14:17.430111  753338 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 14:14:17.430119  753338 command_runner.go:130] > # read_only = false
	I0916 14:14:17.430131  753338 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 14:14:17.430143  753338 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 14:14:17.430152  753338 command_runner.go:130] > # live configuration reload.
	I0916 14:14:17.430163  753338 command_runner.go:130] > # log_level = "info"
	I0916 14:14:17.430175  753338 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 14:14:17.430186  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.430195  753338 command_runner.go:130] > # log_filter = ""
	I0916 14:14:17.430206  753338 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 14:14:17.430221  753338 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 14:14:17.430230  753338 command_runner.go:130] > # separated by comma.
	I0916 14:14:17.430244  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430253  753338 command_runner.go:130] > # uid_mappings = ""
	I0916 14:14:17.430262  753338 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 14:14:17.430275  753338 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 14:14:17.430282  753338 command_runner.go:130] > # separated by comma.
	I0916 14:14:17.430292  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430300  753338 command_runner.go:130] > # gid_mappings = ""
	I0916 14:14:17.430311  753338 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 14:14:17.430323  753338 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 14:14:17.430340  753338 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 14:14:17.430356  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430366  753338 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 14:14:17.430378  753338 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 14:14:17.430389  753338 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 14:14:17.430402  753338 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 14:14:17.430419  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430429  753338 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 14:14:17.430438  753338 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 14:14:17.430450  753338 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 14:14:17.430459  753338 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 14:14:17.430468  753338 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 14:14:17.430477  753338 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 14:14:17.430489  753338 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 14:14:17.430499  753338 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 14:14:17.430510  753338 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 14:14:17.430519  753338 command_runner.go:130] > drop_infra_ctr = false
	I0916 14:14:17.430531  753338 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 14:14:17.430542  753338 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 14:14:17.430558  753338 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 14:14:17.430567  753338 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 14:14:17.430577  753338 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 14:14:17.430589  753338 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 14:14:17.430600  753338 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 14:14:17.430610  753338 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 14:14:17.430623  753338 command_runner.go:130] > # shared_cpuset = ""
	I0916 14:14:17.430635  753338 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 14:14:17.430645  753338 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 14:14:17.430655  753338 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 14:14:17.430668  753338 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 14:14:17.430678  753338 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 14:14:17.430691  753338 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 14:14:17.430703  753338 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 14:14:17.430712  753338 command_runner.go:130] > # enable_criu_support = false
	I0916 14:14:17.430723  753338 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 14:14:17.430737  753338 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 14:14:17.430747  753338 command_runner.go:130] > # enable_pod_events = false
	I0916 14:14:17.430757  753338 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 14:14:17.430770  753338 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 14:14:17.430780  753338 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 14:14:17.430787  753338 command_runner.go:130] > # default_runtime = "runc"
	I0916 14:14:17.430797  753338 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 14:14:17.430809  753338 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 14:14:17.430824  753338 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 14:14:17.430835  753338 command_runner.go:130] > # creation as a file is not desired either.
	I0916 14:14:17.430849  753338 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 14:14:17.430859  753338 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 14:14:17.430869  753338 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 14:14:17.430877  753338 command_runner.go:130] > # ]
	I0916 14:14:17.430886  753338 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 14:14:17.430902  753338 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 14:14:17.430914  753338 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 14:14:17.430922  753338 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 14:14:17.430930  753338 command_runner.go:130] > #
	I0916 14:14:17.430938  753338 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 14:14:17.430949  753338 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 14:14:17.430982  753338 command_runner.go:130] > # runtime_type = "oci"
	I0916 14:14:17.430996  753338 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 14:14:17.431003  753338 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 14:14:17.431010  753338 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 14:14:17.431021  753338 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 14:14:17.431027  753338 command_runner.go:130] > # monitor_env = []
	I0916 14:14:17.431038  753338 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 14:14:17.431045  753338 command_runner.go:130] > # allowed_annotations = []
	I0916 14:14:17.431058  753338 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 14:14:17.431066  753338 command_runner.go:130] > # Where:
	I0916 14:14:17.431075  753338 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 14:14:17.431087  753338 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 14:14:17.431099  753338 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 14:14:17.431111  753338 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 14:14:17.431120  753338 command_runner.go:130] > #   in $PATH.
	I0916 14:14:17.431130  753338 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 14:14:17.431140  753338 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 14:14:17.431154  753338 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 14:14:17.431164  753338 command_runner.go:130] > #   state.
	I0916 14:14:17.431174  753338 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 14:14:17.431186  753338 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 14:14:17.431198  753338 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 14:14:17.431210  753338 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 14:14:17.431222  753338 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 14:14:17.431232  753338 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 14:14:17.431243  753338 command_runner.go:130] > #   The currently recognized values are:
	I0916 14:14:17.431256  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 14:14:17.431271  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 14:14:17.431284  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 14:14:17.431293  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 14:14:17.431308  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 14:14:17.431321  753338 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 14:14:17.431330  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 14:14:17.431343  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 14:14:17.431355  753338 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 14:14:17.431367  753338 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 14:14:17.431377  753338 command_runner.go:130] > #   deprecated option "conmon".
	I0916 14:14:17.431389  753338 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 14:14:17.431399  753338 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 14:14:17.431413  753338 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 14:14:17.431423  753338 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 14:14:17.431435  753338 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 14:14:17.431446  753338 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 14:14:17.431456  753338 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 14:14:17.431464  753338 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 14:14:17.431467  753338 command_runner.go:130] > #
	I0916 14:14:17.431472  753338 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 14:14:17.431477  753338 command_runner.go:130] > #
	I0916 14:14:17.431483  753338 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 14:14:17.431490  753338 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 14:14:17.431498  753338 command_runner.go:130] > #
	I0916 14:14:17.431508  753338 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 14:14:17.431518  753338 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 14:14:17.431526  753338 command_runner.go:130] > #
	I0916 14:14:17.431536  753338 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 14:14:17.431544  753338 command_runner.go:130] > # feature.
	I0916 14:14:17.431549  753338 command_runner.go:130] > #
	I0916 14:14:17.431560  753338 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0916 14:14:17.431572  753338 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 14:14:17.431586  753338 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 14:14:17.431600  753338 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 14:14:17.431613  753338 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 14:14:17.431625  753338 command_runner.go:130] > #
	I0916 14:14:17.431634  753338 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 14:14:17.431643  753338 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 14:14:17.431652  753338 command_runner.go:130] > #
	I0916 14:14:17.431661  753338 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0916 14:14:17.431670  753338 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 14:14:17.431679  753338 command_runner.go:130] > #
	I0916 14:14:17.431688  753338 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 14:14:17.431700  753338 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 14:14:17.431711  753338 command_runner.go:130] > # limitation.
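	For context, a pod exercising the seccomp notifier described above might be annotated roughly as sketched below; the pod name and image are illustrative, and the runtime handler in use must list "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations:
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo              # illustrative name
	  annotations:
	    # Ask CRI-O to terminate the workload once a blocked syscall is observed.
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                     # required, otherwise the kubelet restarts the container immediately
	  containers:
	    - name: app
	      image: docker.io/busybox:stable      # illustrative image
	      command: ["sleep", "3600"]
	      securityContext:
	        seccompProfile:
	          type: RuntimeDefault             # a seccomp profile must be applied for CRI-O to have something to modify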
	I0916 14:14:17.431719  753338 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 14:14:17.431728  753338 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 14:14:17.431734  753338 command_runner.go:130] > runtime_type = "oci"
	I0916 14:14:17.431743  753338 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 14:14:17.431750  753338 command_runner.go:130] > runtime_config_path = ""
	I0916 14:14:17.431761  753338 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 14:14:17.431770  753338 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 14:14:17.431777  753338 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 14:14:17.431786  753338 command_runner.go:130] > monitor_env = [
	I0916 14:14:17.431796  753338 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 14:14:17.431805  753338 command_runner.go:130] > ]
	I0916 14:14:17.431816  753338 command_runner.go:130] > privileged_without_host_devices = false
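	The "runc" entry above is the handler name a Kubernetes RuntimeClass would reference; a minimal sketch (object names are illustrative, not taken from this run):
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: runc                  # illustrative RuntimeClass name
	handler: runc                 # must match a [crio.runtime.runtimes.<handler>] table
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: runtimeclass-demo     # illustrative name
	spec:
	  runtimeClassName: runc      # pods that omit this are run with default_runtime
	  containers:
	    - name: app
	      image: docker.io/busybox:stable
	      command: ["sleep", "3600"]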
	I0916 14:14:17.431826  753338 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 14:14:17.431837  753338 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 14:14:17.431851  753338 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 14:14:17.431865  753338 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0916 14:14:17.431879  753338 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 14:14:17.431891  753338 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 14:14:17.431913  753338 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 14:14:17.431930  753338 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 14:14:17.431940  753338 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 14:14:17.431952  753338 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 14:14:17.431960  753338 command_runner.go:130] > # Example:
	I0916 14:14:17.431967  753338 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 14:14:17.431977  753338 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 14:14:17.431987  753338 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 14:14:17.431995  753338 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 14:14:17.432004  753338 command_runner.go:130] > # cpuset = 0
	I0916 14:14:17.432010  753338 command_runner.go:130] > # cpushares = "0-1"
	I0916 14:14:17.432019  753338 command_runner.go:130] > # Where:
	I0916 14:14:17.432028  753338 command_runner.go:130] > # The workload name is workload-type.
	I0916 14:14:17.432041  753338 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 14:14:17.432052  753338 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 14:14:17.432062  753338 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 14:14:17.432075  753338 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 14:14:17.432086  753338 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
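	Assuming the commented-out workload-type example above were enabled, a pod opting into it could be annotated roughly as follows (names and values are illustrative):
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                   # illustrative name
	  annotations:
	    io.crio/workload: ""                                # activation annotation; key only, value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for container "app"
	spec:
	  containers:
	    - name: app
	      image: docker.io/busybox:stable
	      command: ["sleep", "3600"]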
	I0916 14:14:17.432096  753338 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 14:14:17.432106  753338 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 14:14:17.432116  753338 command_runner.go:130] > # Default value is set to true
	I0916 14:14:17.432123  753338 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 14:14:17.432130  753338 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 14:14:17.432137  753338 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 14:14:17.432142  753338 command_runner.go:130] > # Default value is set to 'false'
	I0916 14:14:17.432148  753338 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 14:14:17.432155  753338 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 14:14:17.432160  753338 command_runner.go:130] > #
	I0916 14:14:17.432165  753338 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 14:14:17.432173  753338 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 14:14:17.432180  753338 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 14:14:17.432187  753338 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 14:14:17.432195  753338 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 14:14:17.432200  753338 command_runner.go:130] > [crio.image]
	I0916 14:14:17.432209  753338 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 14:14:17.432220  753338 command_runner.go:130] > # default_transport = "docker://"
	I0916 14:14:17.432238  753338 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 14:14:17.432248  753338 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 14:14:17.432253  753338 command_runner.go:130] > # global_auth_file = ""
	I0916 14:14:17.432262  753338 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 14:14:17.432269  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.432276  753338 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 14:14:17.432285  753338 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 14:14:17.432294  753338 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 14:14:17.432303  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.432310  753338 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 14:14:17.432319  753338 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 14:14:17.432328  753338 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 14:14:17.432338  753338 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 14:14:17.432346  753338 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 14:14:17.432352  753338 command_runner.go:130] > # pause_command = "/pause"
	I0916 14:14:17.432361  753338 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 14:14:17.432370  753338 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 14:14:17.432379  753338 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 14:14:17.432390  753338 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 14:14:17.432399  753338 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 14:14:17.432408  753338 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 14:14:17.432418  753338 command_runner.go:130] > # pinned_images = [
	I0916 14:14:17.432424  753338 command_runner.go:130] > # ]
	I0916 14:14:17.432434  753338 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 14:14:17.432452  753338 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 14:14:17.432465  753338 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 14:14:17.432477  753338 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 14:14:17.432488  753338 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 14:14:17.432498  753338 command_runner.go:130] > # signature_policy = ""
	I0916 14:14:17.432510  753338 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 14:14:17.432523  753338 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 14:14:17.432536  753338 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 14:14:17.432549  753338 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0916 14:14:17.432563  753338 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 14:14:17.432573  753338 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 14:14:17.432588  753338 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 14:14:17.432601  753338 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 14:14:17.432609  753338 command_runner.go:130] > # changing them here.
	I0916 14:14:17.432623  753338 command_runner.go:130] > # insecure_registries = [
	I0916 14:14:17.432631  753338 command_runner.go:130] > # ]
	I0916 14:14:17.432640  753338 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 14:14:17.432649  753338 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 14:14:17.432656  753338 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 14:14:17.432666  753338 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 14:14:17.432676  753338 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 14:14:17.432688  753338 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 14:14:17.432697  753338 command_runner.go:130] > # CNI plugins.
	I0916 14:14:17.432705  753338 command_runner.go:130] > [crio.network]
	I0916 14:14:17.432719  753338 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 14:14:17.432729  753338 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 14:14:17.432738  753338 command_runner.go:130] > # cni_default_network = ""
	I0916 14:14:17.432746  753338 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 14:14:17.432756  753338 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 14:14:17.432767  753338 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 14:14:17.432776  753338 command_runner.go:130] > # plugin_dirs = [
	I0916 14:14:17.432784  753338 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 14:14:17.432791  753338 command_runner.go:130] > # ]
	I0916 14:14:17.432799  753338 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 14:14:17.432812  753338 command_runner.go:130] > [crio.metrics]
	I0916 14:14:17.432823  753338 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 14:14:17.432831  753338 command_runner.go:130] > enable_metrics = true
	I0916 14:14:17.432841  753338 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 14:14:17.432851  753338 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 14:14:17.432863  753338 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 14:14:17.432875  753338 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 14:14:17.432887  753338 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 14:14:17.432898  753338 command_runner.go:130] > # metrics_collectors = [
	I0916 14:14:17.432907  753338 command_runner.go:130] > # 	"operations",
	I0916 14:14:17.432913  753338 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 14:14:17.432926  753338 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 14:14:17.432932  753338 command_runner.go:130] > # 	"operations_errors",
	I0916 14:14:17.432939  753338 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 14:14:17.432949  753338 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 14:14:17.432956  753338 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 14:14:17.432965  753338 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 14:14:17.432972  753338 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 14:14:17.432979  753338 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 14:14:17.432988  753338 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 14:14:17.432996  753338 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 14:14:17.433008  753338 command_runner.go:130] > # 	"containers_oom_total",
	I0916 14:14:17.433018  753338 command_runner.go:130] > # 	"containers_oom",
	I0916 14:14:17.433025  753338 command_runner.go:130] > # 	"processes_defunct",
	I0916 14:14:17.433034  753338 command_runner.go:130] > # 	"operations_total",
	I0916 14:14:17.433041  753338 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 14:14:17.433052  753338 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 14:14:17.433062  753338 command_runner.go:130] > # 	"operations_errors_total",
	I0916 14:14:17.433069  753338 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 14:14:17.433079  753338 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 14:14:17.433088  753338 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 14:14:17.433095  753338 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 14:14:17.433103  753338 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 14:14:17.433108  753338 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 14:14:17.433115  753338 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 14:14:17.433119  753338 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 14:14:17.433124  753338 command_runner.go:130] > # ]
	I0916 14:14:17.433131  753338 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 14:14:17.433137  753338 command_runner.go:130] > # metrics_port = 9090
	I0916 14:14:17.433142  753338 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 14:14:17.433147  753338 command_runner.go:130] > # metrics_socket = ""
	I0916 14:14:17.433153  753338 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 14:14:17.433160  753338 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 14:14:17.433167  753338 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 14:14:17.433174  753338 command_runner.go:130] > # certificate on any modification event.
	I0916 14:14:17.433178  753338 command_runner.go:130] > # metrics_cert = ""
	I0916 14:14:17.433185  753338 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 14:14:17.433190  753338 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 14:14:17.433196  753338 command_runner.go:130] > # metrics_key = ""
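	With enable_metrics = true and the default metrics_port of 9090, a Prometheus scrape job pointed at this node would be a short sketch along these lines (the target reuses this run's node IP and is otherwise illustrative):
	scrape_configs:
	  - job_name: crio
	    static_configs:
	      - targets: ["192.168.39.163:9090"]   # node IP from this run, default metrics_port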
	I0916 14:14:17.433201  753338 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 14:14:17.433206  753338 command_runner.go:130] > [crio.tracing]
	I0916 14:14:17.433211  753338 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 14:14:17.433217  753338 command_runner.go:130] > # enable_tracing = false
	I0916 14:14:17.433223  753338 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 14:14:17.433229  753338 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 14:14:17.433236  753338 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 14:14:17.433243  753338 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
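	If enable_tracing were switched on, the tracing_endpoint above expects an OTLP/gRPC collector listening on port 4317; a minimal receiver sketch for a recent opentelemetry-collector build (the debug exporter is an assumption about the collector version):
	receivers:
	  otlp:
	    protocols:
	      grpc:
	        endpoint: 0.0.0.0:4317   # matches the default tracing_endpoint above
	exporters:
	  debug: {}                      # print received spans
	service:
	  pipelines:
	    traces:
	      receivers: [otlp]
	      exporters: [debug]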
	I0916 14:14:17.433247  753338 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 14:14:17.433250  753338 command_runner.go:130] > [crio.nri]
	I0916 14:14:17.433254  753338 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 14:14:17.433258  753338 command_runner.go:130] > # enable_nri = false
	I0916 14:14:17.433262  753338 command_runner.go:130] > # NRI socket to listen on.
	I0916 14:14:17.433265  753338 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 14:14:17.433269  753338 command_runner.go:130] > # NRI plugin directory to use.
	I0916 14:14:17.433273  753338 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 14:14:17.433282  753338 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 14:14:17.433287  753338 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 14:14:17.433291  753338 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 14:14:17.433295  753338 command_runner.go:130] > # nri_disable_connections = false
	I0916 14:14:17.433300  753338 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 14:14:17.433305  753338 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 14:14:17.433309  753338 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 14:14:17.433313  753338 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 14:14:17.433318  753338 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 14:14:17.433322  753338 command_runner.go:130] > [crio.stats]
	I0916 14:14:17.433328  753338 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 14:14:17.433333  753338 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 14:14:17.433338  753338 command_runner.go:130] > # stats_collection_period = 0
	I0916 14:14:17.434174  753338 command_runner.go:130] ! time="2024-09-16 14:14:17.386005595Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 14:14:17.434200  753338 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 14:14:17.434297  753338 cni.go:84] Creating CNI manager for ""
	I0916 14:14:17.434313  753338 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 14:14:17.434326  753338 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 14:14:17.434353  753338 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-561755 NodeName:multinode-561755 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 14:14:17.434498  753338 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-561755"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 14:14:17.434566  753338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 14:14:17.445630  753338 command_runner.go:130] > kubeadm
	I0916 14:14:17.445647  753338 command_runner.go:130] > kubectl
	I0916 14:14:17.445653  753338 command_runner.go:130] > kubelet
	I0916 14:14:17.445690  753338 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 14:14:17.445744  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 14:14:17.455924  753338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0916 14:14:17.471996  753338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 14:14:17.487794  753338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0916 14:14:17.503932  753338 ssh_runner.go:195] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I0916 14:14:17.507556  753338 command_runner.go:130] > 192.168.39.163	control-plane.minikube.internal
	I0916 14:14:17.507745  753338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:14:17.641121  753338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 14:14:17.655275  753338 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755 for IP: 192.168.39.163
	I0916 14:14:17.655297  753338 certs.go:194] generating shared ca certs ...
	I0916 14:14:17.655314  753338 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:14:17.655551  753338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 14:14:17.655593  753338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 14:14:17.655604  753338 certs.go:256] generating profile certs ...
	I0916 14:14:17.655685  753338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/client.key
	I0916 14:14:17.655765  753338 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.key.7781cfba
	I0916 14:14:17.655813  753338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.key
	I0916 14:14:17.655824  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 14:14:17.655843  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 14:14:17.655858  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 14:14:17.655869  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 14:14:17.655880  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 14:14:17.655891  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 14:14:17.655906  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 14:14:17.655917  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 14:14:17.655966  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 14:14:17.655992  753338 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 14:14:17.656001  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 14:14:17.656025  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 14:14:17.656047  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 14:14:17.656068  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 14:14:17.656103  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:14:17.656135  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.656147  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.656159  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.656767  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 14:14:17.679159  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 14:14:17.701944  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 14:14:17.724454  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 14:14:17.747343  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 14:14:17.769899  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 14:14:17.792376  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 14:14:17.814743  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 14:14:17.837654  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 14:14:17.860202  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 14:14:17.882325  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 14:14:17.904362  753338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 14:14:17.920182  753338 ssh_runner.go:195] Run: openssl version
	I0916 14:14:17.925765  753338 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 14:14:17.925836  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 14:14:17.937274  753338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.941339  753338 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.941438  753338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.941484  753338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.946670  753338 command_runner.go:130] > 3ec20f2e
	I0916 14:14:17.946724  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 14:14:17.955973  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 14:14:17.966537  753338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.970510  753338 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.970702  753338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.970737  753338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.975687  753338 command_runner.go:130] > b5213941
	I0916 14:14:17.975982  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 14:14:17.985059  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 14:14:17.995368  753338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.999419  753338 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.999644  753338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.999696  753338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 14:14:18.005130  753338 command_runner.go:130] > 51391683
	I0916 14:14:18.005181  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 14:14:18.014481  753338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:14:18.018718  753338 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:14:18.018742  753338 command_runner.go:130] >   Size: 1172      	Blocks: 8          IO Block: 4096   regular file
	I0916 14:14:18.018750  753338 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I0916 14:14:18.018759  753338 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 14:14:18.018768  753338 command_runner.go:130] > Access: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018775  753338 command_runner.go:130] > Modify: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018784  753338 command_runner.go:130] > Change: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018792  753338 command_runner.go:130] >  Birth: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018843  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 14:14:18.024179  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.024241  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 14:14:18.029492  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.029542  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 14:14:18.035041  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.035095  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 14:14:18.040553  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.040900  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 14:14:18.046062  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.046117  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 14:14:18.051110  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.051363  753338 kubeadm.go:392] StartCluster: {Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:14:18.051472  753338 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 14:14:18.051510  753338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:14:18.092004  753338 command_runner.go:130] > 038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7
	I0916 14:14:18.092031  753338 command_runner.go:130] > 481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0
	I0916 14:14:18.092038  753338 command_runner.go:130] > ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a
	I0916 14:14:18.092044  753338 command_runner.go:130] > 9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa
	I0916 14:14:18.092049  753338 command_runner.go:130] > ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae
	I0916 14:14:18.092055  753338 command_runner.go:130] > 70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a
	I0916 14:14:18.092060  753338 command_runner.go:130] > 3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0
	I0916 14:14:18.092067  753338 command_runner.go:130] > b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690
	I0916 14:14:18.092090  753338 cri.go:89] found id: "038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7"
	I0916 14:14:18.092099  753338 cri.go:89] found id: "481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0"
	I0916 14:14:18.092102  753338 cri.go:89] found id: "ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a"
	I0916 14:14:18.092108  753338 cri.go:89] found id: "9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa"
	I0916 14:14:18.092111  753338 cri.go:89] found id: "ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae"
	I0916 14:14:18.092115  753338 cri.go:89] found id: "70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a"
	I0916 14:14:18.092119  753338 cri.go:89] found id: "3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0"
	I0916 14:14:18.092122  753338 cri.go:89] found id: "b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690"
	I0916 14:14:18.092125  753338 cri.go:89] found id: ""
	I0916 14:14:18.092166  753338 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.847111395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496162847091631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5457efd1-ea55-4815-a730-5d482b73123b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.847973517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66e699ab-e068-4387-8360-779cd34028d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.848023407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66e699ab-e068-4387-8360-779cd34028d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.848519473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7ecb82798b3b798b425f376ae69fceea56f0ff4ea945891f45ae8bdbe5d6159,PodSandboxId:edcf6a70f78f56150c7a606ed3ddba1d71be71bebe7a7272b781b0fcc0886c8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726496098865377882,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585,PodSandboxId:3758466f290e44a3a951ed6e050c24645b070ae2a9a92f7a19e85bbfb58b5f3d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726496065292622672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09e26e5b947e1ad4cc55a4d2eef52bc565981127aa468413be01645265181a,PodSandboxId:01046b7ed697e658c7b5337b9968ea7f890f54a7fd71b83d926bdc3acb1bd2e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496065179644187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4,PodSandboxId:923f430b712dcf0744d91386ed8b4b99be5d9d67bc6fa238665c123ca09273f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726496065197877158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb,PodSandboxId:3d1f57513971b5f066cd08d5820a0b5e39de65af5978e46bd85adae9b1d12d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726496065128155803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485-69d699df4f0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2,PodSandboxId:c70dd677d11fa597d6ec78222b711e79639b3938ad7d47bb86f3acbbdea63a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726496060309122005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593,PodSandboxId:f0ec96fa05021b468b9fa53034fe3a6b121b49539029a59cab7770d299c85ae2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726496060317999658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04,PodSandboxId:c470884ef1926ebb19164fb75f7de177126a80beb130e9d7d5534f3eb77a1413,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726496060242162281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427,PodSandboxId:4baa3d76d236be2f97bd8c0ee80559a046e7c5263fada00baad316d4c6011978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726496060234639826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f142b37f6ad4736ca2c31a48410fb0a0763cfb3e5f326abdb05f4d160d17137d,PodSandboxId:ca61839e3c0689bd7a36b88485c1f9710edfab094c7b0f910c9d04fd33f6c78b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726495738198748013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7,PodSandboxId:ddc303bed82d694be4fdb59d47e89bf53a38cda5349e4c6ddf817f0d0bc6f0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726495684302842989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0,PodSandboxId:01cf246881f39153fbcdf2784b9c50c5a67d1281b40e6236d0d1e375223857b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726495684257143168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a,PodSandboxId:020a5cb0db3168c3b25b11970fcb1c1ad28cf13d1b6b50dc2b548f9a98a77a11,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726495672528979913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa,PodSandboxId:23d0c9f0f0ead39e68d0053b8386167c02b312e7a2474465a4d48f17effb9502,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726495672364897137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485
-69d699df4f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0,PodSandboxId:34c92d6e01422106edf0e9a26d493fb92c99d33a667e42c6ea7b220e6de1a1d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726495661966083067,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae,PodSandboxId:6d3d28ba2d9406cb9cb83c3bf908c731c255da93e54452d7502a7044d19e33dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726495661987657499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a,PodSandboxId:f5f8f6ffee17612e8e16f3f376d0b36addedfe5531cd4cd5bd68f8157cfd394c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726495661985766604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690,PodSandboxId:f6a1100542c6686c11f8bcf8c484bec30f0b16f87a0529598cb2074670af80b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726495661919311623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66e699ab-e068-4387-8360-779cd34028d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.888336051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6598b1c8-bf23-471e-bb3e-f995c3893368 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.888536447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6598b1c8-bf23-471e-bb3e-f995c3893368 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.890125174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b66e87b-300e-4e11-a091-266e74408e1e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.890605112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496162890580749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b66e87b-300e-4e11-a091-266e74408e1e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.891051419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d75958ac-c144-4362-8650-85642a039631 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.891105682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d75958ac-c144-4362-8650-85642a039631 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.891480836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7ecb82798b3b798b425f376ae69fceea56f0ff4ea945891f45ae8bdbe5d6159,PodSandboxId:edcf6a70f78f56150c7a606ed3ddba1d71be71bebe7a7272b781b0fcc0886c8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726496098865377882,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585,PodSandboxId:3758466f290e44a3a951ed6e050c24645b070ae2a9a92f7a19e85bbfb58b5f3d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726496065292622672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09e26e5b947e1ad4cc55a4d2eef52bc565981127aa468413be01645265181a,PodSandboxId:01046b7ed697e658c7b5337b9968ea7f890f54a7fd71b83d926bdc3acb1bd2e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496065179644187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4,PodSandboxId:923f430b712dcf0744d91386ed8b4b99be5d9d67bc6fa238665c123ca09273f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726496065197877158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb,PodSandboxId:3d1f57513971b5f066cd08d5820a0b5e39de65af5978e46bd85adae9b1d12d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726496065128155803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485-69d699df4f0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2,PodSandboxId:c70dd677d11fa597d6ec78222b711e79639b3938ad7d47bb86f3acbbdea63a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726496060309122005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593,PodSandboxId:f0ec96fa05021b468b9fa53034fe3a6b121b49539029a59cab7770d299c85ae2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726496060317999658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04,PodSandboxId:c470884ef1926ebb19164fb75f7de177126a80beb130e9d7d5534f3eb77a1413,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726496060242162281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427,PodSandboxId:4baa3d76d236be2f97bd8c0ee80559a046e7c5263fada00baad316d4c6011978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726496060234639826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f142b37f6ad4736ca2c31a48410fb0a0763cfb3e5f326abdb05f4d160d17137d,PodSandboxId:ca61839e3c0689bd7a36b88485c1f9710edfab094c7b0f910c9d04fd33f6c78b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726495738198748013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7,PodSandboxId:ddc303bed82d694be4fdb59d47e89bf53a38cda5349e4c6ddf817f0d0bc6f0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726495684302842989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0,PodSandboxId:01cf246881f39153fbcdf2784b9c50c5a67d1281b40e6236d0d1e375223857b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726495684257143168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a,PodSandboxId:020a5cb0db3168c3b25b11970fcb1c1ad28cf13d1b6b50dc2b548f9a98a77a11,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726495672528979913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa,PodSandboxId:23d0c9f0f0ead39e68d0053b8386167c02b312e7a2474465a4d48f17effb9502,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726495672364897137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485
-69d699df4f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0,PodSandboxId:34c92d6e01422106edf0e9a26d493fb92c99d33a667e42c6ea7b220e6de1a1d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726495661966083067,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae,PodSandboxId:6d3d28ba2d9406cb9cb83c3bf908c731c255da93e54452d7502a7044d19e33dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726495661987657499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a,PodSandboxId:f5f8f6ffee17612e8e16f3f376d0b36addedfe5531cd4cd5bd68f8157cfd394c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726495661985766604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690,PodSandboxId:f6a1100542c6686c11f8bcf8c484bec30f0b16f87a0529598cb2074670af80b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726495661919311623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d75958ac-c144-4362-8650-85642a039631 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.973445057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=570d9bfe-9628-4b48-b7cb-6eeaefa47be7 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.973510208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=570d9bfe-9628-4b48-b7cb-6eeaefa47be7 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.974724065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a427797-d67e-4885-8697-15e48ff8aadf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.975913037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496162975889076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a427797-d67e-4885-8697-15e48ff8aadf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.976699550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=956df5e9-79d1-44df-847b-eafcca416294 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.976815089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=956df5e9-79d1-44df-847b-eafcca416294 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:16:02 multinode-561755 crio[2716]: time="2024-09-16 14:16:02.978793418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7ecb82798b3b798b425f376ae69fceea56f0ff4ea945891f45ae8bdbe5d6159,PodSandboxId:edcf6a70f78f56150c7a606ed3ddba1d71be71bebe7a7272b781b0fcc0886c8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726496098865377882,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585,PodSandboxId:3758466f290e44a3a951ed6e050c24645b070ae2a9a92f7a19e85bbfb58b5f3d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726496065292622672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09e26e5b947e1ad4cc55a4d2eef52bc565981127aa468413be01645265181a,PodSandboxId:01046b7ed697e658c7b5337b9968ea7f890f54a7fd71b83d926bdc3acb1bd2e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496065179644187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4,PodSandboxId:923f430b712dcf0744d91386ed8b4b99be5d9d67bc6fa238665c123ca09273f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726496065197877158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb,PodSandboxId:3d1f57513971b5f066cd08d5820a0b5e39de65af5978e46bd85adae9b1d12d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726496065128155803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485-69d699df4f0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2,PodSandboxId:c70dd677d11fa597d6ec78222b711e79639b3938ad7d47bb86f3acbbdea63a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726496060309122005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593,PodSandboxId:f0ec96fa05021b468b9fa53034fe3a6b121b49539029a59cab7770d299c85ae2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726496060317999658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04,PodSandboxId:c470884ef1926ebb19164fb75f7de177126a80beb130e9d7d5534f3eb77a1413,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726496060242162281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427,PodSandboxId:4baa3d76d236be2f97bd8c0ee80559a046e7c5263fada00baad316d4c6011978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726496060234639826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f142b37f6ad4736ca2c31a48410fb0a0763cfb3e5f326abdb05f4d160d17137d,PodSandboxId:ca61839e3c0689bd7a36b88485c1f9710edfab094c7b0f910c9d04fd33f6c78b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726495738198748013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7,PodSandboxId:ddc303bed82d694be4fdb59d47e89bf53a38cda5349e4c6ddf817f0d0bc6f0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726495684302842989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0,PodSandboxId:01cf246881f39153fbcdf2784b9c50c5a67d1281b40e6236d0d1e375223857b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726495684257143168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a,PodSandboxId:020a5cb0db3168c3b25b11970fcb1c1ad28cf13d1b6b50dc2b548f9a98a77a11,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726495672528979913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa,PodSandboxId:23d0c9f0f0ead39e68d0053b8386167c02b312e7a2474465a4d48f17effb9502,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726495672364897137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485
-69d699df4f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0,PodSandboxId:34c92d6e01422106edf0e9a26d493fb92c99d33a667e42c6ea7b220e6de1a1d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726495661966083067,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae,PodSandboxId:6d3d28ba2d9406cb9cb83c3bf908c731c255da93e54452d7502a7044d19e33dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726495661987657499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a,PodSandboxId:f5f8f6ffee17612e8e16f3f376d0b36addedfe5531cd4cd5bd68f8157cfd394c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726495661985766604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690,PodSandboxId:f6a1100542c6686c11f8bcf8c484bec30f0b16f87a0529598cb2074670af80b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726495661919311623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=956df5e9-79d1-44df-847b-eafcca416294 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f7ecb82798b3b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   edcf6a70f78f5       busybox-7dff88458-f9c5w
	58508acb74855       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   3758466f290e4       kindnet-t6sh4
	adf4d60b2f201       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   923f430b712dc       coredns-7c65d6cfc9-qgmxs
	7b09e26e5b947       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   01046b7ed697e       storage-provisioner
	6732202a9735a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   3d1f57513971b       kube-proxy-fz92k
	3d2341c5103f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   f0ec96fa05021       etcd-multinode-561755
	32c48dc4407b5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   c70dd677d11fa       kube-scheduler-multinode-561755
	b454d7bb25571       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   c470884ef1926       kube-apiserver-multinode-561755
	c82f6eb6f5f32       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   4baa3d76d236b       kube-controller-manager-multinode-561755
	f142b37f6ad47       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   ca61839e3c068       busybox-7dff88458-f9c5w
	038d0db591c9e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago        Exited              coredns                   0                   ddc303bed82d6       coredns-7c65d6cfc9-qgmxs
	481d5f837d21d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   01cf246881f39       storage-provisioner
	ad6237280bcbc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   020a5cb0db316       kindnet-t6sh4
	9bbf062b56098       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   23d0c9f0f0ead       kube-proxy-fz92k
	ffe27a6ccf80f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   6d3d28ba2d940       kube-scheduler-multinode-561755
	70cdfc29b2970       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   f5f8f6ffee176       kube-apiserver-multinode-561755
	3e77a439b0e91       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   34c92d6e01422       etcd-multinode-561755
	b4d468e417dd8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   f6a1100542c66       kube-controller-manager-multinode-561755
	
	
	==> coredns [038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7] <==
	[INFO] 10.244.1.2:38653 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001613273s
	[INFO] 10.244.1.2:52874 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146113s
	[INFO] 10.244.1.2:32874 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077559s
	[INFO] 10.244.1.2:57140 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001054421s
	[INFO] 10.244.1.2:34864 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072288s
	[INFO] 10.244.1.2:32985 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006605s
	[INFO] 10.244.1.2:54940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062877s
	[INFO] 10.244.0.3:38082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137423s
	[INFO] 10.244.0.3:40392 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000053684s
	[INFO] 10.244.0.3:39986 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105196s
	[INFO] 10.244.0.3:43189 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036357s
	[INFO] 10.244.1.2:32802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014118s
	[INFO] 10.244.1.2:46476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121807s
	[INFO] 10.244.1.2:46921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097849s
	[INFO] 10.244.1.2:46714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121275s
	[INFO] 10.244.0.3:57079 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132786s
	[INFO] 10.244.0.3:49020 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170154s
	[INFO] 10.244.0.3:60501 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125852s
	[INFO] 10.244.0.3:48526 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120325s
	[INFO] 10.244.1.2:33299 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146427s
	[INFO] 10.244.1.2:43843 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110016s
	[INFO] 10.244.1.2:49995 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104189s
	[INFO] 10.244.1.2:54004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092229s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37252 - 21208 "HINFO IN 2766008737970293421.9180525390571247957. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010463352s
	
	
	==> describe nodes <==
	Name:               multinode-561755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-561755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=multinode-561755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T14_07_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 14:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-561755
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 14:15:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:08:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    multinode-561755
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 abfdd7c763814fb7a99004bb6a18a7f4
	  System UUID:                abfdd7c7-6381-4fb7-a990-04bb6a18a7f4
	  Boot ID:                    d00a5a85-3106-449c-943b-e325316e5e8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-f9c5w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-7c65d6cfc9-qgmxs                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m11s
	  kube-system                 etcd-multinode-561755                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m16s
	  kube-system                 kindnet-t6sh4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m12s
	  kube-system                 kube-apiserver-multinode-561755             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-multinode-561755    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-fz92k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-multinode-561755             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m10s                  kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node multinode-561755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node multinode-561755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s (x7 over 8m22s)  kubelet          Node multinode-561755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m16s                  kubelet          Node multinode-561755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s                  kubelet          Node multinode-561755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s                  kubelet          Node multinode-561755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m12s                  node-controller  Node multinode-561755 event: Registered Node multinode-561755 in Controller
	  Normal  NodeReady                8m                     kubelet          Node multinode-561755 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-561755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-561755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-561755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node multinode-561755 event: Registered Node multinode-561755 in Controller
	
	
	Name:               multinode-561755-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-561755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=multinode-561755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T14_15_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 14:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-561755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 14:15:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:15:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:15:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:15:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:15:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    multinode-561755-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3d12e8cd78542e197df8ad303b2b9a0
	  System UUID:                a3d12e8c-d785-42e1-97df-8ad303b2b9a0
	  Boot ID:                    6181806d-5667-41f5-9bf7-9bb25344fc91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cwk54    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kindnet-8qqj5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m28s
	  kube-system                 kube-proxy-dgsnj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m23s                  kube-proxy  
	  Normal  Starting                 52s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m29s (x2 over 7m29s)  kubelet     Node multinode-561755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x2 over 7m29s)  kubelet     Node multinode-561755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x2 over 7m29s)  kubelet     Node multinode-561755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m10s                  kubelet     Node multinode-561755-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  58s (x2 over 58s)      kubelet     Node multinode-561755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x2 over 58s)      kubelet     Node multinode-561755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x2 over 58s)      kubelet     Node multinode-561755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-561755-m02 status is now: NodeReady
	
	
	Name:               multinode-561755-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-561755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=multinode-561755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T14_15_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 14:15:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-561755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 14:16:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 14:16:00 +0000   Mon, 16 Sep 2024 14:15:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 14:16:00 +0000   Mon, 16 Sep 2024 14:15:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 14:16:00 +0000   Mon, 16 Sep 2024 14:15:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 14:16:00 +0000   Mon, 16 Sep 2024 14:16:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    multinode-561755-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb76e9913b8b4d65894b670714ba5e9e
	  System UUID:                eb76e991-3b8b-4d65-894b-670714ba5e9e
	  Boot ID:                    51d7837e-7e58-420e-a64c-313ae189261e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mc7zk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m29s
	  kube-system                 kube-proxy-kd8nx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  Starting                 6m23s                  kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s (x2 over 6m29s)  kubelet          Node multinode-561755-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x2 over 6m29s)  kubelet          Node multinode-561755-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s (x2 over 6m29s)  kubelet          Node multinode-561755-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m11s                  kubelet          Node multinode-561755-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-561755-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-561755-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-561755-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m42s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m24s                  kubelet          Node multinode-561755-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet          Node multinode-561755-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet          Node multinode-561755-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet          Node multinode-561755-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                    node-controller  Node multinode-561755-m03 event: Registered Node multinode-561755-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-561755-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056960] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063741] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.173445] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.128851] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.285816] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.801844] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +4.283668] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.054383] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990944] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.075184] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.611490] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.858125] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 14:08] kauditd_printk_skb: 41 callbacks suppressed
	[ +52.002975] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 14:14] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.137606] systemd-fstab-generator[2651]: Ignoring "noauto" option for root device
	[  +0.162607] systemd-fstab-generator[2665]: Ignoring "noauto" option for root device
	[  +0.137343] systemd-fstab-generator[2677]: Ignoring "noauto" option for root device
	[  +0.260613] systemd-fstab-generator[2705]: Ignoring "noauto" option for root device
	[  +0.638377] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +1.803436] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +5.705069] kauditd_printk_skb: 184 callbacks suppressed
	[  +9.632179] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.493793] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[ +17.636895] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593] <==
	{"level":"info","ts":"2024-09-16T14:14:20.880858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 switched to configuration voters=(12153077199096499956)"}
	{"level":"info","ts":"2024-09-16T14:14:20.887453Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e373eafcd5903e51","local-member-id":"a8a86752a40bcef4","added-peer-id":"a8a86752a40bcef4","added-peer-peer-urls":["https://192.168.39.163:2380"]}
	{"level":"info","ts":"2024-09-16T14:14:20.887617Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e373eafcd5903e51","local-member-id":"a8a86752a40bcef4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:14:20.887668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:14:20.889024Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T14:14:20.891515Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a8a86752a40bcef4","initial-advertise-peer-urls":["https://192.168.39.163:2380"],"listen-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.163:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T14:14:20.893261Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T14:14:20.893462Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:14:20.893492Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:14:22.294157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T14:14:22.294260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T14:14:22.294303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 received MsgPreVoteResp from a8a86752a40bcef4 at term 2"}
	{"level":"info","ts":"2024-09-16T14:14:22.294318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.294323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 received MsgVoteResp from a8a86752a40bcef4 at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.294331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.294339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a8a86752a40bcef4 elected leader a8a86752a40bcef4 at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.298920Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:14:22.299317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:14:22.298911Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a8a86752a40bcef4","local-member-attributes":"{Name:multinode-561755 ClientURLs:[https://192.168.39.163:2379]}","request-path":"/0/members/a8a86752a40bcef4/attributes","cluster-id":"e373eafcd5903e51","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T14:14:22.299824Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T14:14:22.299920Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T14:14:22.300082Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:14:22.300949Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:14:22.301928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.163:2379"}
	{"level":"info","ts":"2024-09-16T14:14:22.300970Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0] <==
	{"level":"info","ts":"2024-09-16T14:08:34.886634Z","caller":"traceutil/trace.go:171","msg":"trace[1736031726] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"241.928727ms","start":"2024-09-16T14:08:34.644687Z","end":"2024-09-16T14:08:34.886616Z","steps":["trace[1736031726] 'process raft request'  (duration: 236.954146ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T14:09:34.526798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.642609ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14912704774043398584 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-561755-m03.17f5bec2657cec39\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-561755-m03.17f5bec2657cec39\" value_size:642 lease:5689332737188622451 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T14:09:34.526927Z","caller":"traceutil/trace.go:171","msg":"trace[634631698] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"152.460443ms","start":"2024-09-16T14:09:34.374451Z","end":"2024-09-16T14:09:34.526912Z","steps":["trace[634631698] 'read index received'  (duration: 28.484µs)","trace[634631698] 'applied index is now lower than readState.Index'  (duration: 152.431265ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T14:09:34.527033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.569603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-561755-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T14:09:34.527073Z","caller":"traceutil/trace.go:171","msg":"trace[2116339976] range","detail":"{range_begin:/registry/minions/multinode-561755-m03; range_end:; response_count:0; response_revision:610; }","duration":"152.621541ms","start":"2024-09-16T14:09:34.374446Z","end":"2024-09-16T14:09:34.527068Z","steps":["trace[2116339976] 'agreement among raft nodes before linearized reading'  (duration: 152.512122ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:09:34.527107Z","caller":"traceutil/trace.go:171","msg":"trace[1146783470] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"204.229038ms","start":"2024-09-16T14:09:34.322800Z","end":"2024-09-16T14:09:34.527029Z","steps":["trace[1146783470] 'compare'  (duration: 199.513462ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T14:09:35.478034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.715539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-561755-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T14:09:35.478104Z","caller":"traceutil/trace.go:171","msg":"trace[317187513] range","detail":"{range_begin:/registry/csinodes/multinode-561755-m03; range_end:; response_count:0; response_revision:631; }","duration":"168.792037ms","start":"2024-09-16T14:09:35.309298Z","end":"2024-09-16T14:09:35.478090Z","steps":["trace[317187513] 'range keys from in-memory index tree'  (duration: 168.580181ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:09:35.660363Z","caller":"traceutil/trace.go:171","msg":"trace[1249465066] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"128.147058ms","start":"2024-09-16T14:09:35.532203Z","end":"2024-09-16T14:09:35.660350Z","steps":["trace[1249465066] 'read index received'  (duration: 127.949608ms)","trace[1249465066] 'applied index is now lower than readState.Index'  (duration: 196.984µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T14:09:35.660521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.317842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-561755-m03\" ","response":"range_response_count:1 size:2824"}
	{"level":"info","ts":"2024-09-16T14:09:35.660551Z","caller":"traceutil/trace.go:171","msg":"trace[1442177822] range","detail":"{range_begin:/registry/minions/multinode-561755-m03; range_end:; response_count:1; response_revision:632; }","duration":"128.360965ms","start":"2024-09-16T14:09:35.532183Z","end":"2024-09-16T14:09:35.660544Z","steps":["trace[1442177822] 'agreement among raft nodes before linearized reading'  (duration: 128.233904ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:09:35.660752Z","caller":"traceutil/trace.go:171","msg":"trace[1289672176] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"177.587061ms","start":"2024-09-16T14:09:35.483157Z","end":"2024-09-16T14:09:35.660744Z","steps":["trace[1289672176] 'process raft request'  (duration: 177.032795ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T14:09:35.945139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.140294ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T14:09:35.945323Z","caller":"traceutil/trace.go:171","msg":"trace[802709646] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:632; }","duration":"243.331255ms","start":"2024-09-16T14:09:35.701978Z","end":"2024-09-16T14:09:35.945310Z","steps":["trace[802709646] 'range keys from in-memory index tree'  (duration: 243.129774ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:12:45.060139Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T14:12:45.066174Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-561755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"]}
	{"level":"warn","ts":"2024-09-16T14:12:45.070307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T14:12:45.070437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/09/16 14:12:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T14:12:45.147142Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T14:12:45.147217Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T14:12:45.147361Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a8a86752a40bcef4","current-leader-member-id":"a8a86752a40bcef4"}
	{"level":"info","ts":"2024-09-16T14:12:45.149951Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:12:45.150101Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:12:45.150137Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-561755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"]}
	
	
	==> kernel <==
	 14:16:03 up 8 min,  0 users,  load average: 0.04, 0.17, 0.12
	Linux multinode-561755 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585] <==
	I0916 14:15:16.313157       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:15:26.313361       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:15:26.313472       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:15:26.313674       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:15:26.313698       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:15:26.313751       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:15:26.313777       1 main.go:299] handling current node
	I0916 14:15:36.321447       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:15:36.321535       1 main.go:299] handling current node
	I0916 14:15:36.321557       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:15:36.321565       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:15:36.321804       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:15:36.321814       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:15:46.312646       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:15:46.312706       1 main.go:299] handling current node
	I0916 14:15:46.312720       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:15:46.312726       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:15:46.312856       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:15:46.312861       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.2.0/24] 
	I0916 14:15:56.313956       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:15:56.314034       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:15:56.314199       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:15:56.314208       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.2.0/24] 
	I0916 14:15:56.314324       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:15:56.314350       1 main.go:299] handling current node
	
	
	==> kindnet [ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a] <==
	I0916 14:12:03.525625       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:13.526363       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:13.526493       1 main.go:299] handling current node
	I0916 14:12:13.526529       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:13.526549       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:12:13.526690       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:13.526716       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:23.525378       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:23.525478       1 main.go:299] handling current node
	I0916 14:12:23.525525       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:23.525535       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:12:23.525718       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:23.525750       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:33.516661       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:33.516713       1 main.go:299] handling current node
	I0916 14:12:33.516727       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:33.516734       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:12:33.516880       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:33.516904       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:43.516678       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:43.516773       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:43.516915       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:43.516949       1 main.go:299] handling current node
	I0916 14:12:43.517014       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:43.517037       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a] <==
	E0916 14:09:01.158030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.163:8443->192.168.39.1:59556: use of closed network connection
	E0916 14:09:01.323588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.163:8443->192.168.39.1:59568: use of closed network connection
	I0916 14:12:45.060474       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0916 14:12:45.080591       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0916 14:12:45.080849       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 14:12:45.081057       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0916 14:12:45.081166       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0916 14:12:45.082666       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0916 14:12:45.082791       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0916 14:12:45.082823       1 establishing_controller.go:92] Shutting down EstablishingController
	I0916 14:12:45.082839       1 naming_controller.go:305] Shutting down NamingConditionController
	I0916 14:12:45.082854       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0916 14:12:45.082868       1 controller.go:170] Shutting down OpenAPI controller
	I0916 14:12:45.082879       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0916 14:12:45.082894       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0916 14:12:45.082913       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0916 14:12:45.082929       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0916 14:12:45.082936       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0916 14:12:45.082954       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0916 14:12:45.082963       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0916 14:12:45.082975       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0916 14:12:45.082982       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0916 14:12:45.082990       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0916 14:12:45.083001       1 controller.go:132] Ending legacy_token_tracking_controller
	I0916 14:12:45.083006       1 controller.go:133] Shutting down legacy_token_tracking_controller
	
	
	==> kube-apiserver [b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04] <==
	I0916 14:14:23.610434       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 14:14:23.612772       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 14:14:23.613512       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 14:14:23.619541       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 14:14:23.633340       1 aggregator.go:171] initial CRD sync complete...
	I0916 14:14:23.633370       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 14:14:23.633381       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 14:14:23.633386       1 cache.go:39] Caches are synced for autoregister controller
	I0916 14:14:23.637189       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 14:14:23.637316       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 14:14:23.637448       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 14:14:23.637588       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 14:14:23.643758       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 14:14:23.643819       1 policy_source.go:224] refreshing policies
	I0916 14:14:23.647653       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 14:14:23.648914       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 14:14:23.683910       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 14:14:24.486091       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 14:14:25.999977       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 14:14:26.103716       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 14:14:26.117481       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 14:14:26.181401       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 14:14:26.186977       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 14:14:26.982338       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 14:14:27.374563       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690] <==
	I0916 14:10:21.712832       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:10:21.715426       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-561755-m03\" does not exist"
	I0916 14:10:21.731680       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-561755-m03" podCIDRs=["10.244.3.0/24"]
	I0916 14:10:21.731730       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	E0916 14:10:21.753670       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-561755-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-561755-m03" podCIDRs=["10.244.4.0/24"]
	E0916 14:10:21.753734       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-561755-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-561755-m03"
	E0916 14:10:21.753836       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-561755-m03': failed to patch node CIDR: Node \"multinode-561755-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 14:10:21.753876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:21.759129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:22.086195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:26.133784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:32.140060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:39.484922       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:10:39.485290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:39.497923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:41.117862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:11:21.133424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:11:21.134511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m03"
	I0916 14:11:21.149901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:11:21.179902       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.458482ms"
	I0916 14:11:21.180473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.523µs"
	I0916 14:11:26.181857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:11:26.196407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:11:26.227101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:11:36.308273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	
	
	==> kube-controller-manager [c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427] <==
	I0916 14:15:23.450922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:15:23.458661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="88.392µs"
	I0916 14:15:23.473940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="77.045µs"
	I0916 14:15:26.278571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.966687ms"
	I0916 14:15:26.278858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="146.98µs"
	I0916 14:15:27.120839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:15:36.498431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:15:41.029542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:41.055308       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:41.280568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:41.281419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:15:42.178556       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-561755-m03\" does not exist"
	I0916 14:15:42.179849       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:15:42.189648       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-561755-m03" podCIDRs=["10.244.2.0/24"]
	I0916 14:15:42.189714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.189936       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.196173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.299561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.661694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:47.220678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:52.351181       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:00.092957       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:16:00.093285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:00.106886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:02.142127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	
	
	==> kube-proxy [6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 14:14:25.598401       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 14:14:25.619833       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.163"]
	E0916 14:14:25.619925       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 14:14:25.741170       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 14:14:25.741537       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 14:14:25.741942       1 server_linux.go:169] "Using iptables Proxier"
	I0916 14:14:25.752459       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 14:14:25.753101       1 server.go:483] "Version info" version="v1.31.1"
	I0916 14:14:25.753131       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:14:25.754785       1 config.go:199] "Starting service config controller"
	I0916 14:14:25.754862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 14:14:25.754917       1 config.go:105] "Starting endpoint slice config controller"
	I0916 14:14:25.754922       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 14:14:25.755859       1 config.go:328] "Starting node config controller"
	I0916 14:14:25.755893       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 14:14:25.856911       1 shared_informer.go:320] Caches are synced for service config
	I0916 14:14:25.857019       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 14:14:25.858366       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 14:07:52.580898       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 14:07:52.593571       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.163"]
	E0916 14:07:52.593690       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 14:07:52.742051       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 14:07:52.744316       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 14:07:52.744458       1 server_linux.go:169] "Using iptables Proxier"
	I0916 14:07:52.747007       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 14:07:52.747530       1 server.go:483] "Version info" version="v1.31.1"
	I0916 14:07:52.747592       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:07:52.750947       1 config.go:199] "Starting service config controller"
	I0916 14:07:52.751053       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 14:07:52.751529       1 config.go:105] "Starting endpoint slice config controller"
	I0916 14:07:52.751560       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 14:07:52.752751       1 config.go:328] "Starting node config controller"
	I0916 14:07:52.752784       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 14:07:52.852408       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 14:07:52.852469       1 shared_informer.go:320] Caches are synced for service config
	I0916 14:07:52.853015       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2] <==
	I0916 14:14:21.392964       1 serving.go:386] Generated self-signed cert in-memory
	I0916 14:14:23.678798       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 14:14:23.678845       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:14:23.686314       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0916 14:14:23.686375       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0916 14:14:23.686483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 14:14:23.686510       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 14:14:23.686523       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0916 14:14:23.686531       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 14:14:23.687102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 14:14:23.688583       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 14:14:23.786987       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 14:14:23.787065       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0916 14:14:23.787081       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae] <==
	E0916 14:07:44.455825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.455889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 14:07:44.455940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.455897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:44.456063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.456996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 14:07:44.457040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.460296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 14:07:44.460332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.408547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 14:07:45.408598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.420776       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 14:07:45.420824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.454597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:45.454641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.479006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:45.479048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.504890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 14:07:45.504947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.565951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:45.566072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.691526       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 14:07:45.691700       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 14:07:48.838202       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 14:12:45.059361       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 14:14:29 multinode-561755 kubelet[2933]: E0916 14:14:29.631754    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496069631042750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:14:34 multinode-561755 kubelet[2933]: I0916 14:14:34.632818    2933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 14:14:39 multinode-561755 kubelet[2933]: E0916 14:14:39.635368    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496079634019024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:14:39 multinode-561755 kubelet[2933]: E0916 14:14:39.635586    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496079634019024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:14:49 multinode-561755 kubelet[2933]: E0916 14:14:49.637761    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496089636557836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:14:49 multinode-561755 kubelet[2933]: E0916 14:14:49.637817    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496089636557836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:14:59 multinode-561755 kubelet[2933]: E0916 14:14:59.639744    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496099639083862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:14:59 multinode-561755 kubelet[2933]: E0916 14:14:59.639770    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496099639083862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:09 multinode-561755 kubelet[2933]: E0916 14:15:09.645468    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496109644785726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:09 multinode-561755 kubelet[2933]: E0916 14:15:09.645523    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496109644785726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:19 multinode-561755 kubelet[2933]: E0916 14:15:19.622367    2933 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 14:15:19 multinode-561755 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 14:15:19 multinode-561755 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 14:15:19 multinode-561755 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 14:15:19 multinode-561755 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 14:15:19 multinode-561755 kubelet[2933]: E0916 14:15:19.648006    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496119647142621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:19 multinode-561755 kubelet[2933]: E0916 14:15:19.648085    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496119647142621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:29 multinode-561755 kubelet[2933]: E0916 14:15:29.649872    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496129649161454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:29 multinode-561755 kubelet[2933]: E0916 14:15:29.650991    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496129649161454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:39 multinode-561755 kubelet[2933]: E0916 14:15:39.652839    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496139652597032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:39 multinode-561755 kubelet[2933]: E0916 14:15:39.652860    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496139652597032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:49 multinode-561755 kubelet[2933]: E0916 14:15:49.655839    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496149654996980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:49 multinode-561755 kubelet[2933]: E0916 14:15:49.656184    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496149654996980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:59 multinode-561755 kubelet[2933]: E0916 14:15:59.658888    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496159658525375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:15:59 multinode-561755 kubelet[2933]: E0916 14:15:59.659203    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496159658525375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 14:16:02.577024  754460 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19652-713072/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
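	The "token too long" error in the stderr above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt. As a minimal, self-contained sketch of that failure mode and the standard workaround (raising the scanner buffer) — the file name and buffer sizes are illustrative, not minikube's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative path; any file containing a very long line
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// With the default buffer, a line longer than bufio.MaxScanTokenSize (64 KiB)
	// makes Scan() return false and Err() report "bufio.Scanner: token too long".
	// Raising the limit lets long log lines through.
	sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process one (possibly very long) line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
```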
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-561755 -n multinode-561755
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-561755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (322.14s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-561755 stop: exit status 82 (2m0.456879211s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-561755-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
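	The GUEST_STOP_TIMEOUT above means the stop request was issued but the VM never left the "Running" state before minikube's deadline. A minimal sketch of that stop-and-poll pattern, with hypothetical requestStop/getState stand-ins for the real KVM driver calls (this is not minikube's implementation, and the deadline is shortened for the example):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// requestStop and getState are hypothetical stand-ins for the machine driver calls.
func requestStop() error        { return nil }
func getState() (string, error) { return "Running", nil } // never reaches "Stopped" in this sketch

func stopWithTimeout(ctx context.Context) error {
	if err := requestStop(); err != nil {
		return err
	}
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			// Mirrors the failure mode seen above: deadline expired, VM still running.
			return errors.New(`unable to stop vm, current state "Running"`)
		case <-tick.C:
			if st, err := getState(); err == nil && st == "Stopped" {
				return nil // VM shut down within the deadline
			}
		}
	}
}

func main() {
	// The real run above waited roughly two minutes; 5s keeps the sketch quick.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := stopWithTimeout(ctx); err != nil {
		fmt.Println("Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}
```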
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-561755 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-561755 status: exit status 3 (18.791022152s)

                                                
                                                
-- stdout --
	multinode-561755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-561755-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 14:18:25.830108  755563 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.34:22: connect: no route to host
	E0916 14:18:25.830144  755563 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.34:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-561755 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-561755 -n multinode-561755
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-561755 logs -n 25: (1.454866672s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755:/home/docker/cp-test_multinode-561755-m02_multinode-561755.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755 sudo cat                                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | /home/docker/cp-test_multinode-561755-m02_multinode-561755.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m03:/home/docker/cp-test_multinode-561755-m02_multinode-561755-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:09 UTC |
	|         | multinode-561755-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755-m03 sudo cat                                   | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:09 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m02_multinode-561755-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp testdata/cp-test.txt                                                | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1710468598/001/cp-test_multinode-561755-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755:/home/docker/cp-test_multinode-561755-m03_multinode-561755.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755 sudo cat                                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m03_multinode-561755.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt                       | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m02:/home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755-m02 sudo cat                                   | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-561755 node stop m03                                                          | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	| node    | multinode-561755 node start                                                             | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-561755                                                                | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC |                     |
	| stop    | -p multinode-561755                                                                     | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC |                     |
	| start   | -p multinode-561755                                                                     | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:12 UTC | 16 Sep 24 14:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-561755                                                                | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:16 UTC |                     |
	| node    | multinode-561755 node delete                                                            | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:16 UTC | 16 Sep 24 14:16 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-561755 stop                                                                   | multinode-561755 | jenkins | v1.34.0 | 16 Sep 24 14:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 14:12:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 14:12:44.162099  753338 out.go:345] Setting OutFile to fd 1 ...
	I0916 14:12:44.162247  753338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:12:44.162258  753338 out.go:358] Setting ErrFile to fd 2...
	I0916 14:12:44.162264  753338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:12:44.162438  753338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 14:12:44.163012  753338 out.go:352] Setting JSON to false
	I0916 14:12:44.164014  753338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14113,"bootTime":1726481851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 14:12:44.164119  753338 start.go:139] virtualization: kvm guest
	I0916 14:12:44.166711  753338 out.go:177] * [multinode-561755] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 14:12:44.168115  753338 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 14:12:44.168104  753338 notify.go:220] Checking for updates...
	I0916 14:12:44.170624  753338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 14:12:44.171919  753338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 14:12:44.173303  753338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 14:12:44.174801  753338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 14:12:44.176199  753338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 14:12:44.177727  753338 config.go:182] Loaded profile config "multinode-561755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 14:12:44.177841  753338 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 14:12:44.178493  753338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:12:44.178541  753338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:12:44.194766  753338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0916 14:12:44.195320  753338 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:12:44.195969  753338 main.go:141] libmachine: Using API Version  1
	I0916 14:12:44.195990  753338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:12:44.196349  753338 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:12:44.196550  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:12:44.233058  753338 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 14:12:44.234212  753338 start.go:297] selected driver: kvm2
	I0916 14:12:44.234229  753338 start.go:901] validating driver "kvm2" against &{Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:12:44.234350  753338 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 14:12:44.234707  753338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:12:44.234783  753338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 14:12:44.249588  753338 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 14:12:44.250252  753338 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 14:12:44.250287  753338 cni.go:84] Creating CNI manager for ""
	I0916 14:12:44.250351  753338 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 14:12:44.250412  753338 start.go:340] cluster config:
	{Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:12:44.250546  753338 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:12:44.252843  753338 out.go:177] * Starting "multinode-561755" primary control-plane node in "multinode-561755" cluster
	I0916 14:12:44.254031  753338 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 14:12:44.254072  753338 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 14:12:44.254081  753338 cache.go:56] Caching tarball of preloaded images
	I0916 14:12:44.254152  753338 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 14:12:44.254162  753338 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 14:12:44.254271  753338 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/config.json ...
	I0916 14:12:44.254489  753338 start.go:360] acquireMachinesLock for multinode-561755: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 14:12:44.254527  753338 start.go:364] duration metric: took 21.927µs to acquireMachinesLock for "multinode-561755"
	I0916 14:12:44.254541  753338 start.go:96] Skipping create...Using existing machine configuration
	I0916 14:12:44.254546  753338 fix.go:54] fixHost starting: 
	I0916 14:12:44.254790  753338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:12:44.254828  753338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:12:44.268740  753338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0916 14:12:44.269193  753338 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:12:44.269715  753338 main.go:141] libmachine: Using API Version  1
	I0916 14:12:44.269743  753338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:12:44.270044  753338 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:12:44.270217  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:12:44.270333  753338 main.go:141] libmachine: (multinode-561755) Calling .GetState
	I0916 14:12:44.271638  753338 fix.go:112] recreateIfNeeded on multinode-561755: state=Running err=<nil>
	W0916 14:12:44.271669  753338 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 14:12:44.273466  753338 out.go:177] * Updating the running kvm2 "multinode-561755" VM ...
	I0916 14:12:44.274606  753338 machine.go:93] provisionDockerMachine start ...
	I0916 14:12:44.274632  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:12:44.274806  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.277182  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.277649  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.277720  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.277784  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.277945  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.278099  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.278208  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.278349  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.278558  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.278573  753338 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 14:12:44.394681  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-561755
	
	I0916 14:12:44.394707  753338 main.go:141] libmachine: (multinode-561755) Calling .GetMachineName
	I0916 14:12:44.394992  753338 buildroot.go:166] provisioning hostname "multinode-561755"
	I0916 14:12:44.395026  753338 main.go:141] libmachine: (multinode-561755) Calling .GetMachineName
	I0916 14:12:44.395219  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.397689  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.398075  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.398102  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.398256  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.398426  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.398578  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.398698  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.398844  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.399023  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.399040  753338 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-561755 && echo "multinode-561755" | sudo tee /etc/hostname
	I0916 14:12:44.529742  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-561755
	
	I0916 14:12:44.529772  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.532199  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.532633  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.532666  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.532794  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.532987  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.533129  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.533279  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.533422  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.533593  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.533608  753338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-561755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-561755/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-561755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 14:12:44.646883  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
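
The SSH command above guards /etc/hosts: if no line already ends in the machine name, it either rewrites an existing 127.0.1.1 entry or appends one. Below is a minimal Go sketch of that same guard (not minikube's actual code), run against an in-memory example instead of the real file; the starting file contents are hypothetical.

// Sketch of the /etc/hosts hostname guard shown in the SSH command above.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic: leave the file unchanged if a
// line already ends in the hostname, otherwise rewrite an existing
// 127.0.1.1 entry or append a new one.
func ensureHostsEntry(hosts, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	// Hypothetical starting contents; prints the file with its 127.0.1.1
	// line rewritten to point at multinode-561755.
	example := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostsEntry(example, "multinode-561755"))
}
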
	I0916 14:12:44.646940  753338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 14:12:44.646983  753338 buildroot.go:174] setting up certificates
	I0916 14:12:44.647001  753338 provision.go:84] configureAuth start
	I0916 14:12:44.647018  753338 main.go:141] libmachine: (multinode-561755) Calling .GetMachineName
	I0916 14:12:44.647320  753338 main.go:141] libmachine: (multinode-561755) Calling .GetIP
	I0916 14:12:44.650086  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.650383  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.650403  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.650587  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.652528  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.652803  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.652833  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.652942  753338 provision.go:143] copyHostCerts
	I0916 14:12:44.652985  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 14:12:44.653037  753338 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 14:12:44.653050  753338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 14:12:44.653128  753338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 14:12:44.653245  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 14:12:44.653271  753338 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 14:12:44.653279  753338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 14:12:44.653317  753338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 14:12:44.653463  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 14:12:44.653490  753338 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 14:12:44.653500  753338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 14:12:44.653538  753338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 14:12:44.653656  753338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.multinode-561755 san=[127.0.0.1 192.168.39.163 localhost minikube multinode-561755]
	I0916 14:12:44.768791  753338 provision.go:177] copyRemoteCerts
	I0916 14:12:44.768870  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 14:12:44.768898  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.771831  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.772265  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.772307  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.772484  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.772694  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.772851  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.772972  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:12:44.859977  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 14:12:44.860064  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 14:12:44.885259  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 14:12:44.885358  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 14:12:44.909952  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 14:12:44.910021  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 14:12:44.933833  753338 provision.go:87] duration metric: took 286.813153ms to configureAuth
	I0916 14:12:44.933869  753338 buildroot.go:189] setting minikube options for container-runtime
	I0916 14:12:44.934307  753338 config.go:182] Loaded profile config "multinode-561755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 14:12:44.934408  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:12:44.937271  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.937663  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:12:44.937704  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:12:44.937958  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:12:44.938171  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.938335  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:12:44.938473  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:12:44.938624  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:12:44.938834  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:12:44.938855  753338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 14:14:15.585702  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 14:14:15.585752  753338 machine.go:96] duration metric: took 1m31.311122005s to provisionDockerMachine
	I0916 14:14:15.585768  753338 start.go:293] postStartSetup for "multinode-561755" (driver="kvm2")
	I0916 14:14:15.585822  753338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 14:14:15.585849  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.586254  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 14:14:15.586285  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.589701  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.590099  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.590120  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.590310  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.590504  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.590684  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.590844  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:14:15.684393  753338 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 14:14:15.689309  753338 command_runner.go:130] > NAME=Buildroot
	I0916 14:14:15.689331  753338 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 14:14:15.689343  753338 command_runner.go:130] > ID=buildroot
	I0916 14:14:15.689350  753338 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 14:14:15.689357  753338 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 14:14:15.689394  753338 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 14:14:15.689407  753338 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 14:14:15.689461  753338 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 14:14:15.689544  753338 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 14:14:15.689556  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /etc/ssl/certs/7205442.pem
	I0916 14:14:15.689690  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 14:14:15.699431  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:14:15.722108  753338 start.go:296] duration metric: took 136.328874ms for postStartSetup
	I0916 14:14:15.722140  753338 fix.go:56] duration metric: took 1m31.46759514s for fixHost
	I0916 14:14:15.722163  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.724890  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.725262  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.725285  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.725438  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.725637  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.725810  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.725944  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.726081  753338 main.go:141] libmachine: Using SSH client type: native
	I0916 14:14:15.726244  753338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0916 14:14:15.726254  753338 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 14:14:15.837817  753338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726496055.811109899
	
	I0916 14:14:15.837839  753338 fix.go:216] guest clock: 1726496055.811109899
	I0916 14:14:15.837846  753338 fix.go:229] Guest: 2024-09-16 14:14:15.811109899 +0000 UTC Remote: 2024-09-16 14:14:15.72214485 +0000 UTC m=+91.595923156 (delta=88.965049ms)
	I0916 14:14:15.837882  753338 fix.go:200] guest clock delta is within tolerance: 88.965049ms
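
The fix.go check above reads the guest clock over SSH with date +%s.%N, parses the seconds.nanoseconds value, and accepts the drift when it falls inside a tolerance (88.965049ms here). A minimal Go sketch of that comparison follows; the one-second tolerance is an illustrative assumption, not minikube's actual constant.

// Sketch (not minikube's fix.go): parse the guest clock reported by
// `date +%s.%N` and compare it with the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output like "1726496055.811109899" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest value taken from the log line above.
	guest, err := parseGuestClock("1726496055.811109899\n")
	if err != nil {
		panic(err)
	}
	host := time.Now() // stands in for the host-side "Remote" timestamp
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; clock would be adjusted\n", delta)
	}
}
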
	I0916 14:14:15.837887  753338 start.go:83] releasing machines lock for "multinode-561755", held for 1m31.583351981s
	I0916 14:14:15.837907  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.838173  753338 main.go:141] libmachine: (multinode-561755) Calling .GetIP
	I0916 14:14:15.840747  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.841103  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.841124  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.841297  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.841815  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.841988  753338 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:14:15.842107  753338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 14:14:15.842153  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.842237  753338 ssh_runner.go:195] Run: cat /version.json
	I0916 14:14:15.842263  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:14:15.844633  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.844951  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.844982  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.845005  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.845128  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.845295  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.845447  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:15.845455  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.845474  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:15.845649  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:14:15.845646  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:14:15.845824  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:14:15.845960  753338 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:14:15.846086  753338 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:14:15.945840  753338 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 14:14:15.945891  753338 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 14:14:15.946031  753338 ssh_runner.go:195] Run: systemctl --version
	I0916 14:14:15.951565  753338 command_runner.go:130] > systemd 252 (252)
	I0916 14:14:15.951615  753338 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 14:14:15.951686  753338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 14:14:16.106670  753338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 14:14:16.112488  753338 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 14:14:16.112535  753338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 14:14:16.112596  753338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 14:14:16.121447  753338 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 14:14:16.121466  753338 start.go:495] detecting cgroup driver to use...
	I0916 14:14:16.121517  753338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 14:14:16.136907  753338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 14:14:16.149990  753338 docker.go:217] disabling cri-docker service (if available) ...
	I0916 14:14:16.150023  753338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 14:14:16.162604  753338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 14:14:16.175139  753338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 14:14:16.309311  753338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 14:14:16.440113  753338 docker.go:233] disabling docker service ...
	I0916 14:14:16.440186  753338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 14:14:16.455226  753338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 14:14:16.468719  753338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 14:14:16.599755  753338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 14:14:16.739840  753338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 14:14:16.754847  753338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 14:14:16.773075  753338 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 14:14:16.773127  753338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 14:14:16.773184  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.783578  753338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 14:14:16.783651  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.793513  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.803156  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.813484  753338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 14:14:16.823599  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.833179  753338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:14:16.843955  753338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
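
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod". A minimal Go sketch of those three substitutions (omitting the sysctl edits) is shown below, applied with regexp to an in-memory copy of the config; the starting contents are hypothetical, not taken from this run.

// Sketch of the 02-crio.conf rewrites, done with regexp instead of sed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// sed -i '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' followed by
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
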
	I0916 14:14:16.853607  753338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 14:14:16.862390  753338 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 14:14:16.862446  753338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 14:14:16.871187  753338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:14:17.002438  753338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 14:14:17.185411  753338 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 14:14:17.185478  753338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 14:14:17.190166  753338 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 14:14:17.190186  753338 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 14:14:17.190192  753338 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0916 14:14:17.190199  753338 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 14:14:17.190203  753338 command_runner.go:130] > Access: 2024-09-16 14:14:17.066932209 +0000
	I0916 14:14:17.190209  753338 command_runner.go:130] > Modify: 2024-09-16 14:14:17.066932209 +0000
	I0916 14:14:17.190214  753338 command_runner.go:130] > Change: 2024-09-16 14:14:17.066932209 +0000
	I0916 14:14:17.190218  753338 command_runner.go:130] >  Birth: -
	I0916 14:14:17.190257  753338 start.go:563] Will wait 60s for crictl version
	I0916 14:14:17.190327  753338 ssh_runner.go:195] Run: which crictl
	I0916 14:14:17.194057  753338 command_runner.go:130] > /usr/bin/crictl
	I0916 14:14:17.194120  753338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 14:14:17.233722  753338 command_runner.go:130] > Version:  0.1.0
	I0916 14:14:17.233743  753338 command_runner.go:130] > RuntimeName:  cri-o
	I0916 14:14:17.233748  753338 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 14:14:17.233753  753338 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 14:14:17.233957  753338 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 14:14:17.234039  753338 ssh_runner.go:195] Run: crio --version
	I0916 14:14:17.264508  753338 command_runner.go:130] > crio version 1.29.1
	I0916 14:14:17.264525  753338 command_runner.go:130] > Version:        1.29.1
	I0916 14:14:17.264532  753338 command_runner.go:130] > GitCommit:      unknown
	I0916 14:14:17.264539  753338 command_runner.go:130] > GitCommitDate:  unknown
	I0916 14:14:17.264545  753338 command_runner.go:130] > GitTreeState:   clean
	I0916 14:14:17.264558  753338 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 14:14:17.264565  753338 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 14:14:17.264572  753338 command_runner.go:130] > Compiler:       gc
	I0916 14:14:17.264578  753338 command_runner.go:130] > Platform:       linux/amd64
	I0916 14:14:17.264584  753338 command_runner.go:130] > Linkmode:       dynamic
	I0916 14:14:17.264592  753338 command_runner.go:130] > BuildTags:      
	I0916 14:14:17.264597  753338 command_runner.go:130] >   containers_image_ostree_stub
	I0916 14:14:17.264604  753338 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 14:14:17.264613  753338 command_runner.go:130] >   btrfs_noversion
	I0916 14:14:17.264620  753338 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 14:14:17.264626  753338 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 14:14:17.264636  753338 command_runner.go:130] >   seccomp
	I0916 14:14:17.264648  753338 command_runner.go:130] > LDFlags:          unknown
	I0916 14:14:17.264654  753338 command_runner.go:130] > SeccompEnabled:   true
	I0916 14:14:17.264659  753338 command_runner.go:130] > AppArmorEnabled:  false
	I0916 14:14:17.264770  753338 ssh_runner.go:195] Run: crio --version
	I0916 14:14:17.291130  753338 command_runner.go:130] > crio version 1.29.1
	I0916 14:14:17.291153  753338 command_runner.go:130] > Version:        1.29.1
	I0916 14:14:17.291162  753338 command_runner.go:130] > GitCommit:      unknown
	I0916 14:14:17.291169  753338 command_runner.go:130] > GitCommitDate:  unknown
	I0916 14:14:17.291175  753338 command_runner.go:130] > GitTreeState:   clean
	I0916 14:14:17.291189  753338 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 14:14:17.291197  753338 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 14:14:17.291206  753338 command_runner.go:130] > Compiler:       gc
	I0916 14:14:17.291213  753338 command_runner.go:130] > Platform:       linux/amd64
	I0916 14:14:17.291223  753338 command_runner.go:130] > Linkmode:       dynamic
	I0916 14:14:17.291233  753338 command_runner.go:130] > BuildTags:      
	I0916 14:14:17.291241  753338 command_runner.go:130] >   containers_image_ostree_stub
	I0916 14:14:17.291251  753338 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 14:14:17.291260  753338 command_runner.go:130] >   btrfs_noversion
	I0916 14:14:17.291269  753338 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 14:14:17.291278  753338 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 14:14:17.291283  753338 command_runner.go:130] >   seccomp
	I0916 14:14:17.291292  753338 command_runner.go:130] > LDFlags:          unknown
	I0916 14:14:17.291301  753338 command_runner.go:130] > SeccompEnabled:   true
	I0916 14:14:17.291311  753338 command_runner.go:130] > AppArmorEnabled:  false
	I0916 14:14:17.294866  753338 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 14:14:17.296070  753338 main.go:141] libmachine: (multinode-561755) Calling .GetIP
	I0916 14:14:17.298681  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:17.298993  753338 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:14:17.299024  753338 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:14:17.299203  753338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 14:14:17.303474  753338 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 14:14:17.303585  753338 kubeadm.go:883] updating cluster {Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 14:14:17.303763  753338 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 14:14:17.303816  753338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:14:17.345252  753338 command_runner.go:130] > {
	I0916 14:14:17.345270  753338 command_runner.go:130] >   "images": [
	I0916 14:14:17.345274  753338 command_runner.go:130] >     {
	I0916 14:14:17.345281  753338 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 14:14:17.345287  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345296  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 14:14:17.345302  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345308  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345323  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 14:14:17.345338  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 14:14:17.345343  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345348  753338 command_runner.go:130] >       "size": "87190579",
	I0916 14:14:17.345355  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345358  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345364  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345370  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345374  753338 command_runner.go:130] >     },
	I0916 14:14:17.345378  753338 command_runner.go:130] >     {
	I0916 14:14:17.345384  753338 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 14:14:17.345391  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345396  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 14:14:17.345401  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345405  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345412  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 14:14:17.345421  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 14:14:17.345424  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345428  753338 command_runner.go:130] >       "size": "1363676",
	I0916 14:14:17.345432  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345441  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345449  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345453  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345458  753338 command_runner.go:130] >     },
	I0916 14:14:17.345461  753338 command_runner.go:130] >     {
	I0916 14:14:17.345469  753338 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 14:14:17.345473  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345478  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 14:14:17.345484  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345488  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345497  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 14:14:17.345507  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 14:14:17.345511  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345515  753338 command_runner.go:130] >       "size": "31470524",
	I0916 14:14:17.345521  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345525  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345531  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345535  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345538  753338 command_runner.go:130] >     },
	I0916 14:14:17.345541  753338 command_runner.go:130] >     {
	I0916 14:14:17.345547  753338 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 14:14:17.345560  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345567  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 14:14:17.345570  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345574  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345583  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 14:14:17.345595  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 14:14:17.345601  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345604  753338 command_runner.go:130] >       "size": "63273227",
	I0916 14:14:17.345609  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.345612  753338 command_runner.go:130] >       "username": "nonroot",
	I0916 14:14:17.345621  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345627  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345632  753338 command_runner.go:130] >     },
	I0916 14:14:17.345638  753338 command_runner.go:130] >     {
	I0916 14:14:17.345648  753338 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 14:14:17.345657  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345665  753338 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 14:14:17.345690  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345697  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345710  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 14:14:17.345724  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 14:14:17.345732  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345738  753338 command_runner.go:130] >       "size": "149009664",
	I0916 14:14:17.345746  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.345752  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.345760  753338 command_runner.go:130] >       },
	I0916 14:14:17.345766  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345775  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345781  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345789  753338 command_runner.go:130] >     },
	I0916 14:14:17.345794  753338 command_runner.go:130] >     {
	I0916 14:14:17.345803  753338 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 14:14:17.345811  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345819  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 14:14:17.345827  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345833  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345847  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 14:14:17.345859  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 14:14:17.345865  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345869  753338 command_runner.go:130] >       "size": "95237600",
	I0916 14:14:17.345875  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.345878  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.345882  753338 command_runner.go:130] >       },
	I0916 14:14:17.345886  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345892  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345896  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345904  753338 command_runner.go:130] >     },
	I0916 14:14:17.345908  753338 command_runner.go:130] >     {
	I0916 14:14:17.345915  753338 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 14:14:17.345921  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.345927  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 14:14:17.345932  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345936  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.345944  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 14:14:17.345953  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 14:14:17.345957  753338 command_runner.go:130] >       ],
	I0916 14:14:17.345961  753338 command_runner.go:130] >       "size": "89437508",
	I0916 14:14:17.345967  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.345971  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.345974  753338 command_runner.go:130] >       },
	I0916 14:14:17.345979  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.345985  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.345989  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.345992  753338 command_runner.go:130] >     },
	I0916 14:14:17.345995  753338 command_runner.go:130] >     {
	I0916 14:14:17.346001  753338 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 14:14:17.346007  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.346012  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 14:14:17.346017  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346021  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.346038  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 14:14:17.346047  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 14:14:17.346052  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346056  753338 command_runner.go:130] >       "size": "92733849",
	I0916 14:14:17.346062  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.346066  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.346070  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.346076  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.346079  753338 command_runner.go:130] >     },
	I0916 14:14:17.346083  753338 command_runner.go:130] >     {
	I0916 14:14:17.346089  753338 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 14:14:17.346092  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.346097  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 14:14:17.346100  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346104  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.346111  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 14:14:17.346117  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 14:14:17.346121  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346125  753338 command_runner.go:130] >       "size": "68420934",
	I0916 14:14:17.346128  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.346132  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.346135  753338 command_runner.go:130] >       },
	I0916 14:14:17.346138  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.346142  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.346145  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.346148  753338 command_runner.go:130] >     },
	I0916 14:14:17.346151  753338 command_runner.go:130] >     {
	I0916 14:14:17.346156  753338 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 14:14:17.346159  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.346163  753338 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 14:14:17.346166  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346170  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.346177  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 14:14:17.346183  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 14:14:17.346186  753338 command_runner.go:130] >       ],
	I0916 14:14:17.346189  753338 command_runner.go:130] >       "size": "742080",
	I0916 14:14:17.346193  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.346196  753338 command_runner.go:130] >         "value": "65535"
	I0916 14:14:17.346199  753338 command_runner.go:130] >       },
	I0916 14:14:17.346203  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.346207  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.346210  753338 command_runner.go:130] >       "pinned": true
	I0916 14:14:17.346214  753338 command_runner.go:130] >     }
	I0916 14:14:17.346217  753338 command_runner.go:130] >   ]
	I0916 14:14:17.346220  753338 command_runner.go:130] > }
	I0916 14:14:17.346770  753338 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:14:17.346788  753338 crio.go:433] Images already preloaded, skipping extraction
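The JSON above is the reply to "sudo crictl images --output json" that crio.go:514 inspects before concluding that the preloaded image tarball does not need to be extracted. A self-contained sketch of that kind of check (the struct fields mirror the output shown; the expected-image list below is an illustrative subset, not minikube's actual list):

    // preloadcheck.go: decode "crictl images --output json" and verify that a
    // set of expected repo tags is present. Sketch only; not minikube's crio.go.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image mirrors the fields visible in the crictl output above.
    type image struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"`
        Pinned      bool     `json:"pinned"`
    }

    type imageList struct {
        Images []image `json:"images"`
    }

    func main() {
        // Assumes crictl can reach the CRI-O socket in the current context.
        out, err := exec.Command("crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Illustrative subset drawn from the log; the real list is version-dependent.
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/coredns/coredns:v1.11.3",
        } {
            if !have[want] {
                fmt.Println("missing:", want)
                return
            }
        }
        fmt.Println("all expected images are present")
    }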
	I0916 14:14:17.346843  753338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:14:17.377588  753338 command_runner.go:130] > {
	I0916 14:14:17.377611  753338 command_runner.go:130] >   "images": [
	I0916 14:14:17.377615  753338 command_runner.go:130] >     {
	I0916 14:14:17.377623  753338 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 14:14:17.377628  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377634  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 14:14:17.377638  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377642  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377650  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 14:14:17.377657  753338 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 14:14:17.377661  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377677  753338 command_runner.go:130] >       "size": "87190579",
	I0916 14:14:17.377681  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377707  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.377725  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377730  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377734  753338 command_runner.go:130] >     },
	I0916 14:14:17.377737  753338 command_runner.go:130] >     {
	I0916 14:14:17.377743  753338 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 14:14:17.377749  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377756  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 14:14:17.377762  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377766  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377775  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 14:14:17.377782  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 14:14:17.377787  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377792  753338 command_runner.go:130] >       "size": "1363676",
	I0916 14:14:17.377796  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377805  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.377809  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377813  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377818  753338 command_runner.go:130] >     },
	I0916 14:14:17.377822  753338 command_runner.go:130] >     {
	I0916 14:14:17.377828  753338 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 14:14:17.377833  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377838  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 14:14:17.377841  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377848  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377855  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 14:14:17.377863  753338 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 14:14:17.377866  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377870  753338 command_runner.go:130] >       "size": "31470524",
	I0916 14:14:17.377874  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377878  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.377882  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377886  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377890  753338 command_runner.go:130] >     },
	I0916 14:14:17.377893  753338 command_runner.go:130] >     {
	I0916 14:14:17.377899  753338 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 14:14:17.377904  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377909  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 14:14:17.377912  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377916  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377923  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 14:14:17.377934  753338 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 14:14:17.377938  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377943  753338 command_runner.go:130] >       "size": "63273227",
	I0916 14:14:17.377947  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.377955  753338 command_runner.go:130] >       "username": "nonroot",
	I0916 14:14:17.377960  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.377964  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.377969  753338 command_runner.go:130] >     },
	I0916 14:14:17.377972  753338 command_runner.go:130] >     {
	I0916 14:14:17.377978  753338 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 14:14:17.377981  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.377986  753338 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 14:14:17.377989  753338 command_runner.go:130] >       ],
	I0916 14:14:17.377993  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.377999  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 14:14:17.378007  753338 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 14:14:17.378010  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378014  753338 command_runner.go:130] >       "size": "149009664",
	I0916 14:14:17.378019  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378022  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378025  753338 command_runner.go:130] >       },
	I0916 14:14:17.378029  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378034  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378037  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378041  753338 command_runner.go:130] >     },
	I0916 14:14:17.378044  753338 command_runner.go:130] >     {
	I0916 14:14:17.378050  753338 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 14:14:17.378055  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378060  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 14:14:17.378063  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378068  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378075  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 14:14:17.378082  753338 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 14:14:17.378086  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378090  753338 command_runner.go:130] >       "size": "95237600",
	I0916 14:14:17.378094  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378098  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378104  753338 command_runner.go:130] >       },
	I0916 14:14:17.378108  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378112  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378116  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378119  753338 command_runner.go:130] >     },
	I0916 14:14:17.378122  753338 command_runner.go:130] >     {
	I0916 14:14:17.378130  753338 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 14:14:17.378134  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378139  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 14:14:17.378142  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378146  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378154  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 14:14:17.378163  753338 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 14:14:17.378169  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378175  753338 command_runner.go:130] >       "size": "89437508",
	I0916 14:14:17.378179  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378185  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378188  753338 command_runner.go:130] >       },
	I0916 14:14:17.378192  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378199  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378202  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378206  753338 command_runner.go:130] >     },
	I0916 14:14:17.378211  753338 command_runner.go:130] >     {
	I0916 14:14:17.378217  753338 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 14:14:17.378222  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378227  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 14:14:17.378230  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378234  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378248  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 14:14:17.378255  753338 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 14:14:17.378258  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378263  753338 command_runner.go:130] >       "size": "92733849",
	I0916 14:14:17.378266  753338 command_runner.go:130] >       "uid": null,
	I0916 14:14:17.378270  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378274  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378278  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378281  753338 command_runner.go:130] >     },
	I0916 14:14:17.378284  753338 command_runner.go:130] >     {
	I0916 14:14:17.378289  753338 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 14:14:17.378293  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378298  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 14:14:17.378301  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378307  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378314  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 14:14:17.378321  753338 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 14:14:17.378324  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378329  753338 command_runner.go:130] >       "size": "68420934",
	I0916 14:14:17.378332  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378336  753338 command_runner.go:130] >         "value": "0"
	I0916 14:14:17.378340  753338 command_runner.go:130] >       },
	I0916 14:14:17.378343  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378347  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378351  753338 command_runner.go:130] >       "pinned": false
	I0916 14:14:17.378355  753338 command_runner.go:130] >     },
	I0916 14:14:17.378358  753338 command_runner.go:130] >     {
	I0916 14:14:17.378364  753338 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 14:14:17.378369  753338 command_runner.go:130] >       "repoTags": [
	I0916 14:14:17.378374  753338 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 14:14:17.378381  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378385  753338 command_runner.go:130] >       "repoDigests": [
	I0916 14:14:17.378392  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 14:14:17.378404  753338 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 14:14:17.378407  753338 command_runner.go:130] >       ],
	I0916 14:14:17.378411  753338 command_runner.go:130] >       "size": "742080",
	I0916 14:14:17.378416  753338 command_runner.go:130] >       "uid": {
	I0916 14:14:17.378420  753338 command_runner.go:130] >         "value": "65535"
	I0916 14:14:17.378423  753338 command_runner.go:130] >       },
	I0916 14:14:17.378427  753338 command_runner.go:130] >       "username": "",
	I0916 14:14:17.378431  753338 command_runner.go:130] >       "spec": null,
	I0916 14:14:17.378435  753338 command_runner.go:130] >       "pinned": true
	I0916 14:14:17.378441  753338 command_runner.go:130] >     }
	I0916 14:14:17.378444  753338 command_runner.go:130] >   ]
	I0916 14:14:17.378448  753338 command_runner.go:130] > }
	I0916 14:14:17.378773  753338 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:14:17.378795  753338 cache_images.go:84] Images are preloaded, skipping loading
	I0916 14:14:17.378808  753338 kubeadm.go:934] updating node { 192.168.39.163 8443 v1.31.1 crio true true} ...
	I0916 14:14:17.378959  753338 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-561755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
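kubeadm.go:946 above prints the kubelet systemd drop-in with the Kubernetes version, node name and node IP substituted from the cluster config. A minimal sketch of assembling that ExecStart line in Go (values copied from the logged control-plane node; this is not minikube's actual templating code):

    // kubeletunit.go: reproduce the kubelet drop-in shown in the log above,
    // with node-specific values substituted. Illustrative sketch only.
    package main

    import "fmt"

    func main() {
        // Values copied from the logged config for the control-plane node;
        // in a real run they would come from the cluster/node configuration.
        version := "v1.31.1"
        nodeName := "multinode-561755"
        nodeIP := "192.168.39.163"
        execStart := fmt.Sprintf(
            "/var/lib/minikube/binaries/%s/kubelet"+
                " --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
                " --config=/var/lib/kubelet/config.yaml"+
                " --hostname-override=%s"+
                " --kubeconfig=/etc/kubernetes/kubelet.conf"+
                " --node-ip=%s",
            version, nodeName, nodeIP)
        fmt.Println("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=" +
            execStart + "\n\n[Install]")
    }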
	I0916 14:14:17.379047  753338 ssh_runner.go:195] Run: crio config
	I0916 14:14:17.420896  753338 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 14:14:17.420930  753338 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 14:14:17.420940  753338 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 14:14:17.420946  753338 command_runner.go:130] > #
	I0916 14:14:17.420957  753338 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 14:14:17.420966  753338 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 14:14:17.420976  753338 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 14:14:17.420987  753338 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 14:14:17.420994  753338 command_runner.go:130] > # reload'.
	I0916 14:14:17.421004  753338 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 14:14:17.421019  753338 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 14:14:17.421031  753338 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 14:14:17.421042  753338 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 14:14:17.421053  753338 command_runner.go:130] > [crio]
	I0916 14:14:17.421063  753338 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 14:14:17.421073  753338 command_runner.go:130] > # containers images, in this directory.
	I0916 14:14:17.421083  753338 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 14:14:17.421123  753338 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 14:14:17.421163  753338 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 14:14:17.421186  753338 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 14:14:17.421384  753338 command_runner.go:130] > # imagestore = ""
	I0916 14:14:17.421400  753338 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 14:14:17.421409  753338 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 14:14:17.421518  753338 command_runner.go:130] > storage_driver = "overlay"
	I0916 14:14:17.421534  753338 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 14:14:17.421544  753338 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 14:14:17.421550  753338 command_runner.go:130] > storage_option = [
	I0916 14:14:17.421759  753338 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 14:14:17.421832  753338 command_runner.go:130] > ]
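The root, runroot, storage_driver and storage_option values above are the [crio] storage settings reported by "crio config". If one wanted to read just that slice of the configuration programmatically, a hedged sketch using the third-party github.com/BurntSushi/toml decoder (an assumption for illustration; CRI-O itself uses its own config loader, and the file path is only the conventional location) could look like:

    // criostorage.go: read the [crio] storage keys shown above from crio.conf.
    // Sketch only; assumes the github.com/BurntSushi/toml module is available.
    package main

    import (
        "fmt"

        "github.com/BurntSushi/toml"
    )

    type crioConf struct {
        Crio struct {
            Root          string   `toml:"root"`
            RunRoot       string   `toml:"runroot"`
            StorageDriver string   `toml:"storage_driver"`
            StorageOption []string `toml:"storage_option"`
        } `toml:"crio"`
    }

    func main() {
        var cfg crioConf
        // /etc/crio/crio.conf is CRI-O's conventional config path; adjust as needed.
        if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("driver=%s root=%s runroot=%s options=%v\n",
            cfg.Crio.StorageDriver, cfg.Crio.Root, cfg.Crio.RunRoot, cfg.Crio.StorageOption)
    }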
	I0916 14:14:17.421848  753338 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 14:14:17.421857  753338 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 14:14:17.422192  753338 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 14:14:17.422206  753338 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 14:14:17.422216  753338 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 14:14:17.422224  753338 command_runner.go:130] > # always happen on a node reboot
	I0916 14:14:17.422512  753338 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 14:14:17.422533  753338 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 14:14:17.422545  753338 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 14:14:17.422554  753338 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 14:14:17.422737  753338 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 14:14:17.422756  753338 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 14:14:17.422769  753338 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 14:14:17.422984  753338 command_runner.go:130] > # internal_wipe = true
	I0916 14:14:17.423007  753338 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 14:14:17.423018  753338 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 14:14:17.423245  753338 command_runner.go:130] > # internal_repair = false
	I0916 14:14:17.423256  753338 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 14:14:17.423262  753338 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 14:14:17.423267  753338 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 14:14:17.423487  753338 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 14:14:17.423502  753338 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 14:14:17.423508  753338 command_runner.go:130] > [crio.api]
	I0916 14:14:17.423516  753338 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 14:14:17.423951  753338 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 14:14:17.423980  753338 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 14:14:17.424253  753338 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 14:14:17.424273  753338 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 14:14:17.424281  753338 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 14:14:17.424550  753338 command_runner.go:130] > # stream_port = "0"
	I0916 14:14:17.424567  753338 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 14:14:17.424902  753338 command_runner.go:130] > # stream_enable_tls = false
	I0916 14:14:17.424919  753338 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 14:14:17.425251  753338 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 14:14:17.425267  753338 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 14:14:17.425273  753338 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 14:14:17.425277  753338 command_runner.go:130] > # minutes.
	I0916 14:14:17.425515  753338 command_runner.go:130] > # stream_tls_cert = ""
	I0916 14:14:17.425530  753338 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 14:14:17.425539  753338 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 14:14:17.425839  753338 command_runner.go:130] > # stream_tls_key = ""
	I0916 14:14:17.425855  753338 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 14:14:17.425865  753338 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 14:14:17.425885  753338 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 14:14:17.426078  753338 command_runner.go:130] > # stream_tls_ca = ""
	I0916 14:14:17.426098  753338 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 14:14:17.426250  753338 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 14:14:17.426270  753338 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 14:14:17.426446  753338 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
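The two grpc_max_*_msg_size settings above raise CRI-O's per-message limit on its kubelet-facing gRPC socket to 16 MiB (16777216 bytes). A client talking to that socket would typically set matching limits through grpc-go call options; a minimal sketch under those assumptions (no CRI request is actually issued, and the socket path is simply CRI-O's default "listen" value from this config):

    // criclient.go: dial CRI-O's socket with send/receive limits matching the
    // grpc_max_*_msg_size values above. Sketch only; no CRI requests are made.
    package main

    import (
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    const maxMsgSize = 16 * 1024 * 1024 // 16777216, as configured above

    func main() {
        conn, err := grpc.Dial(
            "unix:///var/run/crio/crio.sock", // default "listen" path from this config
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithDefaultCallOptions(
                grpc.MaxCallRecvMsgSize(maxMsgSize),
                grpc.MaxCallSendMsgSize(maxMsgSize),
            ),
        )
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Println("client configured; connection state:", conn.GetState())
    }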
	I0916 14:14:17.426462  753338 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 14:14:17.426470  753338 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 14:14:17.426477  753338 command_runner.go:130] > [crio.runtime]
	I0916 14:14:17.426487  753338 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 14:14:17.426498  753338 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 14:14:17.426507  753338 command_runner.go:130] > # "nofile=1024:2048"
	I0916 14:14:17.426517  753338 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 14:14:17.426596  753338 command_runner.go:130] > # default_ulimits = [
	I0916 14:14:17.426814  753338 command_runner.go:130] > # ]
	I0916 14:14:17.426830  753338 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 14:14:17.427109  753338 command_runner.go:130] > # no_pivot = false
	I0916 14:14:17.427123  753338 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 14:14:17.427133  753338 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 14:14:17.427430  753338 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 14:14:17.427453  753338 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 14:14:17.427465  753338 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 14:14:17.427475  753338 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 14:14:17.427585  753338 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 14:14:17.427596  753338 command_runner.go:130] > # Cgroup setting for conmon
	I0916 14:14:17.427606  753338 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 14:14:17.427838  753338 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 14:14:17.427854  753338 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 14:14:17.427865  753338 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 14:14:17.427878  753338 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 14:14:17.427884  753338 command_runner.go:130] > conmon_env = [
	I0916 14:14:17.427951  753338 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 14:14:17.428007  753338 command_runner.go:130] > ]
	I0916 14:14:17.428020  753338 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 14:14:17.428029  753338 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 14:14:17.428041  753338 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 14:14:17.428881  753338 command_runner.go:130] > # default_env = [
	I0916 14:14:17.428895  753338 command_runner.go:130] > # ]
	I0916 14:14:17.428905  753338 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 14:14:17.428916  753338 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0916 14:14:17.428922  753338 command_runner.go:130] > # selinux = false
	I0916 14:14:17.428931  753338 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 14:14:17.428939  753338 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 14:14:17.428947  753338 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 14:14:17.428953  753338 command_runner.go:130] > # seccomp_profile = ""
	I0916 14:14:17.428961  753338 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 14:14:17.428974  753338 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 14:14:17.428984  753338 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 14:14:17.428994  753338 command_runner.go:130] > # which might increase security.
	I0916 14:14:17.429001  753338 command_runner.go:130] > # This option is currently deprecated,
	I0916 14:14:17.429011  753338 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 14:14:17.429020  753338 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 14:14:17.429032  753338 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 14:14:17.429045  753338 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 14:14:17.429059  753338 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 14:14:17.429072  753338 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 14:14:17.429079  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.429096  753338 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 14:14:17.429110  753338 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 14:14:17.429123  753338 command_runner.go:130] > # the cgroup blockio controller.
	I0916 14:14:17.429133  753338 command_runner.go:130] > # blockio_config_file = ""
	I0916 14:14:17.429149  753338 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 14:14:17.429158  753338 command_runner.go:130] > # blockio parameters.
	I0916 14:14:17.429164  753338 command_runner.go:130] > # blockio_reload = false
	I0916 14:14:17.429175  753338 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 14:14:17.429181  753338 command_runner.go:130] > # irqbalance daemon.
	I0916 14:14:17.429190  753338 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 14:14:17.429201  753338 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 14:14:17.429215  753338 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 14:14:17.429227  753338 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 14:14:17.429239  753338 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 14:14:17.429252  753338 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 14:14:17.429263  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.429271  753338 command_runner.go:130] > # rdt_config_file = ""
	I0916 14:14:17.429280  753338 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 14:14:17.429290  753338 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 14:14:17.429312  753338 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 14:14:17.429325  753338 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 14:14:17.429336  753338 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 14:14:17.429349  753338 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 14:14:17.429358  753338 command_runner.go:130] > # will be added.
	I0916 14:14:17.429367  753338 command_runner.go:130] > # default_capabilities = [
	I0916 14:14:17.429375  753338 command_runner.go:130] > # 	"CHOWN",
	I0916 14:14:17.429381  753338 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 14:14:17.429390  753338 command_runner.go:130] > # 	"FSETID",
	I0916 14:14:17.429396  753338 command_runner.go:130] > # 	"FOWNER",
	I0916 14:14:17.429405  753338 command_runner.go:130] > # 	"SETGID",
	I0916 14:14:17.429412  753338 command_runner.go:130] > # 	"SETUID",
	I0916 14:14:17.429420  753338 command_runner.go:130] > # 	"SETPCAP",
	I0916 14:14:17.429427  753338 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 14:14:17.429435  753338 command_runner.go:130] > # 	"KILL",
	I0916 14:14:17.429443  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429457  753338 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 14:14:17.429474  753338 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 14:14:17.429486  753338 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 14:14:17.429501  753338 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 14:14:17.429513  753338 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 14:14:17.429523  753338 command_runner.go:130] > default_sysctls = [
	I0916 14:14:17.429535  753338 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 14:14:17.429543  753338 command_runner.go:130] > ]
	I0916 14:14:17.429551  753338 command_runner.go:130] > # List of devices on the host that a
	I0916 14:14:17.429564  753338 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 14:14:17.429573  753338 command_runner.go:130] > # allowed_devices = [
	I0916 14:14:17.429578  753338 command_runner.go:130] > # 	"/dev/fuse",
	I0916 14:14:17.429583  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429592  753338 command_runner.go:130] > # List of additional devices. specified as
	I0916 14:14:17.429606  753338 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 14:14:17.429622  753338 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 14:14:17.429634  753338 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 14:14:17.429643  753338 command_runner.go:130] > # additional_devices = [
	I0916 14:14:17.429648  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429659  753338 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 14:14:17.429681  753338 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 14:14:17.429688  753338 command_runner.go:130] > # 	"/etc/cdi",
	I0916 14:14:17.429694  753338 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 14:14:17.429699  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429713  753338 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 14:14:17.429725  753338 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 14:14:17.429734  753338 command_runner.go:130] > # Defaults to false.
	I0916 14:14:17.429741  753338 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 14:14:17.429754  753338 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 14:14:17.429769  753338 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 14:14:17.429777  753338 command_runner.go:130] > # hooks_dir = [
	I0916 14:14:17.429784  753338 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 14:14:17.429790  753338 command_runner.go:130] > # ]
	I0916 14:14:17.429802  753338 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 14:14:17.429817  753338 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 14:14:17.429830  753338 command_runner.go:130] > # its default mounts from the following two files:
	I0916 14:14:17.429837  753338 command_runner.go:130] > #
	I0916 14:14:17.429846  753338 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 14:14:17.429856  753338 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 14:14:17.429865  753338 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 14:14:17.429871  753338 command_runner.go:130] > #
	I0916 14:14:17.429878  753338 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 14:14:17.429886  753338 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 14:14:17.429898  753338 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 14:14:17.429908  753338 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 14:14:17.429913  753338 command_runner.go:130] > #
	I0916 14:14:17.429920  753338 command_runner.go:130] > # default_mounts_file = ""
	I0916 14:14:17.429938  753338 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 14:14:17.429951  753338 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 14:14:17.429958  753338 command_runner.go:130] > pids_limit = 1024
	I0916 14:14:17.429970  753338 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0916 14:14:17.429982  753338 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 14:14:17.429993  753338 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 14:14:17.430009  753338 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 14:14:17.430018  753338 command_runner.go:130] > # log_size_max = -1
	I0916 14:14:17.430029  753338 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 14:14:17.430038  753338 command_runner.go:130] > # log_to_journald = false
	I0916 14:14:17.430050  753338 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 14:14:17.430060  753338 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 14:14:17.430071  753338 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 14:14:17.430078  753338 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 14:14:17.430089  753338 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 14:14:17.430099  753338 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 14:14:17.430111  753338 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 14:14:17.430119  753338 command_runner.go:130] > # read_only = false
	I0916 14:14:17.430131  753338 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 14:14:17.430143  753338 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 14:14:17.430152  753338 command_runner.go:130] > # live configuration reload.
	I0916 14:14:17.430163  753338 command_runner.go:130] > # log_level = "info"
	I0916 14:14:17.430175  753338 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 14:14:17.430186  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.430195  753338 command_runner.go:130] > # log_filter = ""
	I0916 14:14:17.430206  753338 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 14:14:17.430221  753338 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 14:14:17.430230  753338 command_runner.go:130] > # separated by comma.
	I0916 14:14:17.430244  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430253  753338 command_runner.go:130] > # uid_mappings = ""
	I0916 14:14:17.430262  753338 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 14:14:17.430275  753338 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 14:14:17.430282  753338 command_runner.go:130] > # separated by comma.
	I0916 14:14:17.430292  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430300  753338 command_runner.go:130] > # gid_mappings = ""
	I0916 14:14:17.430311  753338 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 14:14:17.430323  753338 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 14:14:17.430340  753338 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 14:14:17.430356  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430366  753338 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 14:14:17.430378  753338 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 14:14:17.430389  753338 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 14:14:17.430402  753338 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 14:14:17.430419  753338 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 14:14:17.430429  753338 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 14:14:17.430438  753338 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 14:14:17.430450  753338 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 14:14:17.430459  753338 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0916 14:14:17.430468  753338 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 14:14:17.430477  753338 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 14:14:17.430489  753338 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 14:14:17.430499  753338 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 14:14:17.430510  753338 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 14:14:17.430519  753338 command_runner.go:130] > drop_infra_ctr = false
	I0916 14:14:17.430531  753338 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 14:14:17.430542  753338 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 14:14:17.430558  753338 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 14:14:17.430567  753338 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 14:14:17.430577  753338 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 14:14:17.430589  753338 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 14:14:17.430600  753338 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 14:14:17.430610  753338 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 14:14:17.430623  753338 command_runner.go:130] > # shared_cpuset = ""
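	The two cpuset options above can be set together. As an illustration only (not taken from this run; the drop-in file name is hypothetical), reserving CPUs 0-1 for infra containers while letting guaranteed containers share CPUs 2-3 could look like:
	# /etc/crio/crio.conf.d/10-cpusets.conf (hypothetical sketch)
	[crio.runtime]
	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2-3"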
	I0916 14:14:17.430635  753338 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 14:14:17.430645  753338 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 14:14:17.430655  753338 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 14:14:17.430668  753338 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 14:14:17.430678  753338 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 14:14:17.430691  753338 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 14:14:17.430703  753338 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 14:14:17.430712  753338 command_runner.go:130] > # enable_criu_support = false
	I0916 14:14:17.430723  753338 command_runner.go:130] > # Enable/disable the generation of container and sandbox lifecycle events
	I0916 14:14:17.430737  753338 command_runner.go:130] > # to be sent to the Kubelet to optimize the PLEG
	I0916 14:14:17.430747  753338 command_runner.go:130] > # enable_pod_events = false
	I0916 14:14:17.430757  753338 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 14:14:17.430780  753338 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 14:14:17.430787  753338 command_runner.go:130] > # default_runtime = "runc"
	I0916 14:14:17.430797  753338 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 14:14:17.430809  753338 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I0916 14:14:17.430824  753338 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 14:14:17.430835  753338 command_runner.go:130] > # creation as a file is not desired either.
	I0916 14:14:17.430849  753338 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 14:14:17.430859  753338 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 14:14:17.430869  753338 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 14:14:17.430877  753338 command_runner.go:130] > # ]
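	As a sketch of the option above (not part of this cluster's configuration), rejecting a missing /etc/hostname bind source would be written as:
	[crio.runtime]
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]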
	I0916 14:14:17.430886  753338 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 14:14:17.430902  753338 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 14:14:17.430914  753338 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 14:14:17.430922  753338 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 14:14:17.430930  753338 command_runner.go:130] > #
	I0916 14:14:17.430938  753338 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 14:14:17.430949  753338 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 14:14:17.430982  753338 command_runner.go:130] > # runtime_type = "oci"
	I0916 14:14:17.430996  753338 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 14:14:17.431003  753338 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 14:14:17.431010  753338 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 14:14:17.431021  753338 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 14:14:17.431027  753338 command_runner.go:130] > # monitor_env = []
	I0916 14:14:17.431038  753338 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 14:14:17.431045  753338 command_runner.go:130] > # allowed_annotations = []
	I0916 14:14:17.431058  753338 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 14:14:17.431066  753338 command_runner.go:130] > # Where:
	I0916 14:14:17.431075  753338 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 14:14:17.431087  753338 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 14:14:17.431099  753338 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 14:14:17.431111  753338 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 14:14:17.431120  753338 command_runner.go:130] > #   in $PATH.
	I0916 14:14:17.431130  753338 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 14:14:17.431140  753338 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 14:14:17.431154  753338 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 14:14:17.431164  753338 command_runner.go:130] > #   state.
	I0916 14:14:17.431174  753338 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 14:14:17.431186  753338 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 14:14:17.431198  753338 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 14:14:17.431210  753338 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 14:14:17.431222  753338 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 14:14:17.431232  753338 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 14:14:17.431243  753338 command_runner.go:130] > #   The currently recognized values are:
	I0916 14:14:17.431256  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 14:14:17.431271  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 14:14:17.431284  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 14:14:17.431293  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 14:14:17.431308  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 14:14:17.431321  753338 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 14:14:17.431330  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 14:14:17.431343  753338 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 14:14:17.431355  753338 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 14:14:17.431367  753338 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 14:14:17.431377  753338 command_runner.go:130] > #   deprecated option "conmon".
	I0916 14:14:17.431389  753338 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 14:14:17.431399  753338 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 14:14:17.431413  753338 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 14:14:17.431423  753338 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 14:14:17.431435  753338 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 14:14:17.431446  753338 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 14:14:17.431456  753338 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 14:14:17.431464  753338 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 14:14:17.431467  753338 command_runner.go:130] > #
	I0916 14:14:17.431472  753338 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 14:14:17.431477  753338 command_runner.go:130] > #
	I0916 14:14:17.431483  753338 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 14:14:17.431490  753338 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 14:14:17.431498  753338 command_runner.go:130] > #
	I0916 14:14:17.431508  753338 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 14:14:17.431518  753338 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 14:14:17.431526  753338 command_runner.go:130] > #
	I0916 14:14:17.431536  753338 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 14:14:17.431544  753338 command_runner.go:130] > # feature.
	I0916 14:14:17.431549  753338 command_runner.go:130] > #
	I0916 14:14:17.431560  753338 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 14:14:17.431572  753338 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 14:14:17.431586  753338 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 14:14:17.431600  753338 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 14:14:17.431613  753338 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 14:14:17.431625  753338 command_runner.go:130] > #
	I0916 14:14:17.431634  753338 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 14:14:17.431643  753338 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 14:14:17.431652  753338 command_runner.go:130] > #
	I0916 14:14:17.431661  753338 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 14:14:17.431670  753338 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 14:14:17.431679  753338 command_runner.go:130] > #
	I0916 14:14:17.431688  753338 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 14:14:17.431700  753338 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 14:14:17.431711  753338 command_runner.go:130] > # limitation.
	I0916 14:14:17.431719  753338 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 14:14:17.431728  753338 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 14:14:17.431734  753338 command_runner.go:130] > runtime_type = "oci"
	I0916 14:14:17.431743  753338 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 14:14:17.431750  753338 command_runner.go:130] > runtime_config_path = ""
	I0916 14:14:17.431761  753338 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 14:14:17.431770  753338 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 14:14:17.431777  753338 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 14:14:17.431786  753338 command_runner.go:130] > monitor_env = [
	I0916 14:14:17.431796  753338 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 14:14:17.431805  753338 command_runner.go:130] > ]
	I0916 14:14:17.431816  753338 command_runner.go:130] > privileged_without_host_devices = false
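	An additional handler can be registered next to runc using the same table format. The crun entry below is a hypothetical sketch, not something this cluster configures; it assumes a crun binary installed at /usr/bin/crun, and pods would select it through a RuntimeClass whose handler name is "crun":
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]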
	I0916 14:14:17.431826  753338 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 14:14:17.431837  753338 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 14:14:17.431851  753338 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 14:14:17.431865  753338 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 14:14:17.431879  753338 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 14:14:17.431891  753338 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 14:14:17.431913  753338 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 14:14:17.431930  753338 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 14:14:17.431940  753338 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 14:14:17.431952  753338 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 14:14:17.431960  753338 command_runner.go:130] > # Example:
	I0916 14:14:17.431967  753338 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 14:14:17.431977  753338 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 14:14:17.431987  753338 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 14:14:17.431995  753338 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 14:14:17.432004  753338 command_runner.go:130] > # cpuset = 0
	I0916 14:14:17.432010  753338 command_runner.go:130] > # cpushares = "0-1"
	I0916 14:14:17.432019  753338 command_runner.go:130] > # Where:
	I0916 14:14:17.432028  753338 command_runner.go:130] > # The workload name is workload-type.
	I0916 14:14:17.432041  753338 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 14:14:17.432052  753338 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 14:14:17.432062  753338 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 14:14:17.432075  753338 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 14:14:17.432086  753338 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 14:14:17.432096  753338 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 14:14:17.432106  753338 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 14:14:17.432116  753338 command_runner.go:130] > # Default value is set to true
	I0916 14:14:17.432123  753338 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 14:14:17.432130  753338 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 14:14:17.432137  753338 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 14:14:17.432142  753338 command_runner.go:130] > # Default value is set to 'false'
	I0916 14:14:17.432148  753338 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 14:14:17.432155  753338 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 14:14:17.432160  753338 command_runner.go:130] > #
	I0916 14:14:17.432165  753338 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 14:14:17.432173  753338 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 14:14:17.432180  753338 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 14:14:17.432187  753338 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 14:14:17.432195  753338 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 14:14:17.432200  753338 command_runner.go:130] > [crio.image]
	I0916 14:14:17.432209  753338 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 14:14:17.432220  753338 command_runner.go:130] > # default_transport = "docker://"
	I0916 14:14:17.432238  753338 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 14:14:17.432248  753338 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 14:14:17.432253  753338 command_runner.go:130] > # global_auth_file = ""
	I0916 14:14:17.432262  753338 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 14:14:17.432269  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.432276  753338 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 14:14:17.432285  753338 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 14:14:17.432294  753338 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 14:14:17.432303  753338 command_runner.go:130] > # This option supports live configuration reload.
	I0916 14:14:17.432310  753338 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 14:14:17.432319  753338 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 14:14:17.432328  753338 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 14:14:17.432338  753338 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 14:14:17.432346  753338 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 14:14:17.432352  753338 command_runner.go:130] > # pause_command = "/pause"
	I0916 14:14:17.432361  753338 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 14:14:17.432370  753338 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 14:14:17.432379  753338 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 14:14:17.432390  753338 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 14:14:17.432399  753338 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 14:14:17.432408  753338 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 14:14:17.432418  753338 command_runner.go:130] > # pinned_images = [
	I0916 14:14:17.432424  753338 command_runner.go:130] > # ]
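	As a sketch of the three pattern styles described above (the entries are illustrative, not from this run), a pinned list can mix exact, glob, and keyword matches:
	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10",        # exact: must match the full name
		"registry.k8s.io/kube-apiserver*",   # glob: wildcard at the end
		"*coredns*",                         # keyword: wildcards on both ends
	]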
	I0916 14:14:17.432434  753338 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 14:14:17.432452  753338 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 14:14:17.432465  753338 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 14:14:17.432477  753338 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 14:14:17.432488  753338 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 14:14:17.432498  753338 command_runner.go:130] > # signature_policy = ""
	I0916 14:14:17.432510  753338 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 14:14:17.432523  753338 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 14:14:17.432536  753338 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 14:14:17.432549  753338 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0916 14:14:17.432563  753338 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 14:14:17.432573  753338 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 14:14:17.432588  753338 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 14:14:17.432601  753338 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 14:14:17.432609  753338 command_runner.go:130] > # changing them here.
	I0916 14:14:17.432623  753338 command_runner.go:130] > # insecure_registries = [
	I0916 14:14:17.432631  753338 command_runner.go:130] > # ]
	I0916 14:14:17.432640  753338 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 14:14:17.432649  753338 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 14:14:17.432656  753338 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 14:14:17.432666  753338 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 14:14:17.432676  753338 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 14:14:17.432688  753338 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 14:14:17.432697  753338 command_runner.go:130] > # CNI plugins.
	I0916 14:14:17.432705  753338 command_runner.go:130] > [crio.network]
	I0916 14:14:17.432719  753338 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 14:14:17.432729  753338 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 14:14:17.432738  753338 command_runner.go:130] > # cni_default_network = ""
	I0916 14:14:17.432746  753338 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 14:14:17.432756  753338 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 14:14:17.432767  753338 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 14:14:17.432776  753338 command_runner.go:130] > # plugin_dirs = [
	I0916 14:14:17.432784  753338 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 14:14:17.432791  753338 command_runner.go:130] > # ]
	I0916 14:14:17.432799  753338 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 14:14:17.432812  753338 command_runner.go:130] > [crio.metrics]
	I0916 14:14:17.432823  753338 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 14:14:17.432831  753338 command_runner.go:130] > enable_metrics = true
	I0916 14:14:17.432841  753338 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 14:14:17.432851  753338 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 14:14:17.432863  753338 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 14:14:17.432875  753338 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 14:14:17.432887  753338 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 14:14:17.432898  753338 command_runner.go:130] > # metrics_collectors = [
	I0916 14:14:17.432907  753338 command_runner.go:130] > # 	"operations",
	I0916 14:14:17.432913  753338 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 14:14:17.432926  753338 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 14:14:17.432932  753338 command_runner.go:130] > # 	"operations_errors",
	I0916 14:14:17.432939  753338 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 14:14:17.432949  753338 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 14:14:17.432956  753338 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 14:14:17.432965  753338 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 14:14:17.432972  753338 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 14:14:17.432979  753338 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 14:14:17.432988  753338 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 14:14:17.432996  753338 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 14:14:17.433008  753338 command_runner.go:130] > # 	"containers_oom_total",
	I0916 14:14:17.433018  753338 command_runner.go:130] > # 	"containers_oom",
	I0916 14:14:17.433025  753338 command_runner.go:130] > # 	"processes_defunct",
	I0916 14:14:17.433034  753338 command_runner.go:130] > # 	"operations_total",
	I0916 14:14:17.433041  753338 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 14:14:17.433052  753338 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 14:14:17.433062  753338 command_runner.go:130] > # 	"operations_errors_total",
	I0916 14:14:17.433069  753338 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 14:14:17.433079  753338 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 14:14:17.433088  753338 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 14:14:17.433095  753338 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 14:14:17.433103  753338 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 14:14:17.433108  753338 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 14:14:17.433115  753338 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 14:14:17.433119  753338 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 14:14:17.433124  753338 command_runner.go:130] > # ]
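	For example (hypothetical, not this node's settings), metrics collection could be narrowed to a few of the collectors listed above:
	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]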
	I0916 14:14:17.433131  753338 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 14:14:17.433137  753338 command_runner.go:130] > # metrics_port = 9090
	I0916 14:14:17.433142  753338 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 14:14:17.433147  753338 command_runner.go:130] > # metrics_socket = ""
	I0916 14:14:17.433153  753338 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 14:14:17.433160  753338 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 14:14:17.433167  753338 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 14:14:17.433174  753338 command_runner.go:130] > # certificate on any modification event.
	I0916 14:14:17.433178  753338 command_runner.go:130] > # metrics_cert = ""
	I0916 14:14:17.433185  753338 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 14:14:17.433190  753338 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 14:14:17.433196  753338 command_runner.go:130] > # metrics_key = ""
	I0916 14:14:17.433201  753338 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 14:14:17.433206  753338 command_runner.go:130] > [crio.tracing]
	I0916 14:14:17.433211  753338 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 14:14:17.433217  753338 command_runner.go:130] > # enable_tracing = false
	I0916 14:14:17.433223  753338 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 14:14:17.433229  753338 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 14:14:17.433236  753338 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 14:14:17.433243  753338 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
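	Enabling export to a local OpenTelemetry collector would, as a sketch (the endpoint address is an assumption), look like:
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000   # 1000000 = sample every span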
	I0916 14:14:17.433247  753338 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 14:14:17.433250  753338 command_runner.go:130] > [crio.nri]
	I0916 14:14:17.433254  753338 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 14:14:17.433258  753338 command_runner.go:130] > # enable_nri = false
	I0916 14:14:17.433262  753338 command_runner.go:130] > # NRI socket to listen on.
	I0916 14:14:17.433265  753338 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 14:14:17.433269  753338 command_runner.go:130] > # NRI plugin directory to use.
	I0916 14:14:17.433273  753338 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 14:14:17.433282  753338 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 14:14:17.433287  753338 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 14:14:17.433291  753338 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 14:14:17.433295  753338 command_runner.go:130] > # nri_disable_connections = false
	I0916 14:14:17.433300  753338 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 14:14:17.433305  753338 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 14:14:17.433309  753338 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 14:14:17.433313  753338 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 14:14:17.433318  753338 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 14:14:17.433322  753338 command_runner.go:130] > [crio.stats]
	I0916 14:14:17.433328  753338 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 14:14:17.433333  753338 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 14:14:17.433338  753338 command_runner.go:130] > # stats_collection_period = 0
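	As a final sketch (the value is illustrative), switching from on-demand collection to a fixed 10-second period:
	[crio.stats]
	stats_collection_period = 10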
	I0916 14:14:17.434174  753338 command_runner.go:130] ! time="2024-09-16 14:14:17.386005595Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 14:14:17.434200  753338 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 14:14:17.434297  753338 cni.go:84] Creating CNI manager for ""
	I0916 14:14:17.434313  753338 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 14:14:17.434326  753338 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 14:14:17.434353  753338 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-561755 NodeName:multinode-561755 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 14:14:17.434498  753338 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-561755"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 14:14:17.434566  753338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 14:14:17.445630  753338 command_runner.go:130] > kubeadm
	I0916 14:14:17.445647  753338 command_runner.go:130] > kubectl
	I0916 14:14:17.445653  753338 command_runner.go:130] > kubelet
	I0916 14:14:17.445690  753338 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 14:14:17.445744  753338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 14:14:17.455924  753338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0916 14:14:17.471996  753338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 14:14:17.487794  753338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0916 14:14:17.503932  753338 ssh_runner.go:195] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I0916 14:14:17.507556  753338 command_runner.go:130] > 192.168.39.163	control-plane.minikube.internal
	I0916 14:14:17.507745  753338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:14:17.641121  753338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 14:14:17.655275  753338 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755 for IP: 192.168.39.163
	I0916 14:14:17.655297  753338 certs.go:194] generating shared ca certs ...
	I0916 14:14:17.655314  753338 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:14:17.655551  753338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 14:14:17.655593  753338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 14:14:17.655604  753338 certs.go:256] generating profile certs ...
	I0916 14:14:17.655685  753338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/client.key
	I0916 14:14:17.655765  753338 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.key.7781cfba
	I0916 14:14:17.655813  753338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.key
	I0916 14:14:17.655824  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 14:14:17.655843  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 14:14:17.655858  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 14:14:17.655869  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 14:14:17.655880  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 14:14:17.655891  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 14:14:17.655906  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 14:14:17.655917  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 14:14:17.655966  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 14:14:17.655992  753338 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 14:14:17.656001  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 14:14:17.656025  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 14:14:17.656047  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 14:14:17.656068  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 14:14:17.656103  753338 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:14:17.656135  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.656147  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.656159  753338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem -> /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.656767  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 14:14:17.679159  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 14:14:17.701944  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 14:14:17.724454  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 14:14:17.747343  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 14:14:17.769899  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 14:14:17.792376  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 14:14:17.814743  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/multinode-561755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 14:14:17.837654  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 14:14:17.860202  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 14:14:17.882325  753338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 14:14:17.904362  753338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 14:14:17.920182  753338 ssh_runner.go:195] Run: openssl version
	I0916 14:14:17.925765  753338 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 14:14:17.925836  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 14:14:17.937274  753338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.941339  753338 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.941438  753338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.941484  753338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 14:14:17.946670  753338 command_runner.go:130] > 3ec20f2e
	I0916 14:14:17.946724  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 14:14:17.955973  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 14:14:17.966537  753338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.970510  753338 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.970702  753338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.970737  753338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:14:17.975687  753338 command_runner.go:130] > b5213941
	I0916 14:14:17.975982  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 14:14:17.985059  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 14:14:17.995368  753338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.999419  753338 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.999644  753338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:14:17.999696  753338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 14:14:18.005130  753338 command_runner.go:130] > 51391683
	I0916 14:14:18.005181  753338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 14:14:18.014481  753338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:14:18.018718  753338 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:14:18.018742  753338 command_runner.go:130] >   Size: 1172      	Blocks: 8          IO Block: 4096   regular file
	I0916 14:14:18.018750  753338 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I0916 14:14:18.018759  753338 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 14:14:18.018768  753338 command_runner.go:130] > Access: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018775  753338 command_runner.go:130] > Modify: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018784  753338 command_runner.go:130] > Change: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018792  753338 command_runner.go:130] >  Birth: 2024-09-16 14:07:37.838277121 +0000
	I0916 14:14:18.018843  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 14:14:18.024179  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.024241  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 14:14:18.029492  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.029542  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 14:14:18.035041  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.035095  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 14:14:18.040553  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.040900  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 14:14:18.046062  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.046117  753338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 14:14:18.051110  753338 command_runner.go:130] > Certificate will not expire
	I0916 14:14:18.051363  753338 kubeadm.go:392] StartCluster: {Name:multinode-561755 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-561755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.132 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
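	Note: for readability, the StartCluster dump above can be pictured as a small configuration struct. The sketch below models only the fields visible in this run (cluster name, driver, memory, container runtime, Kubernetes version, and the three nodes); it is an illustrative subset, not minikube's actual ClusterConfig type.

	package main

	import "fmt"

	// Node mirrors the per-node fields visible in the dump above
	// (illustrative subset, not minikube's real type).
	type Node struct {
		Name         string
		IP           string
		Port         int
		ControlPlane bool
		Worker       bool
	}

	// ClusterConfig captures a few of the fields shown in the StartCluster dump.
	type ClusterConfig struct {
		Name              string
		Driver            string
		Memory            int // MB
		ContainerRuntime  string
		KubernetesVersion string
		Nodes             []Node
	}

	func main() {
		cfg := ClusterConfig{
			Name:              "multinode-561755",
			Driver:            "kvm2",
			Memory:            2200,
			ContainerRuntime:  "crio",
			KubernetesVersion: "v1.31.1",
			Nodes: []Node{
				{Name: "", IP: "192.168.39.163", Port: 8443, ControlPlane: true, Worker: true},
				{Name: "m02", IP: "192.168.39.34", Port: 8443, Worker: true},
				{Name: "m03", IP: "192.168.39.132", Port: 0, Worker: true},
			},
		}
		fmt.Printf("%+v\n", cfg)
	}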
	I0916 14:14:18.051472  753338 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 14:14:18.051510  753338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:14:18.092004  753338 command_runner.go:130] > 038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7
	I0916 14:14:18.092031  753338 command_runner.go:130] > 481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0
	I0916 14:14:18.092038  753338 command_runner.go:130] > ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a
	I0916 14:14:18.092044  753338 command_runner.go:130] > 9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa
	I0916 14:14:18.092049  753338 command_runner.go:130] > ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae
	I0916 14:14:18.092055  753338 command_runner.go:130] > 70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a
	I0916 14:14:18.092060  753338 command_runner.go:130] > 3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0
	I0916 14:14:18.092067  753338 command_runner.go:130] > b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690
	I0916 14:14:18.092090  753338 cri.go:89] found id: "038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7"
	I0916 14:14:18.092099  753338 cri.go:89] found id: "481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0"
	I0916 14:14:18.092102  753338 cri.go:89] found id: "ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a"
	I0916 14:14:18.092108  753338 cri.go:89] found id: "9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa"
	I0916 14:14:18.092111  753338 cri.go:89] found id: "ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae"
	I0916 14:14:18.092115  753338 cri.go:89] found id: "70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a"
	I0916 14:14:18.092119  753338 cri.go:89] found id: "3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0"
	I0916 14:14:18.092122  753338 cri.go:89] found id: "b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690"
	I0916 14:14:18.092125  753338 cri.go:89] found id: ""
	I0916 14:14:18.092166  753338 ssh_runner.go:195] Run: sudo runc list -f json
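	Note: the container enumeration above asks CRI-O for every kube-system container ID (running or exited) via crictl with a namespace label filter, then records each returned ID as a "found id" entry. A minimal Go sketch of that enumeration follows, assuming crictl is on PATH and sudo is available; the wrapper function is illustrative, not minikube's cri package.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns the IDs of all containers (running or
	// exited) whose pod lives in the kube-system namespace, using the same
	// crictl invocation that appears in the log above.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}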
	
	
	==> CRI-O <==
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.437331686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496306437306286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=109c79d7-d837-41a7-8211-2103a474cca1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.437898133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e828fcd6-93be-4fa5-967a-512dd15597f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.437953869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e828fcd6-93be-4fa5-967a-512dd15597f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.438367486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7ecb82798b3b798b425f376ae69fceea56f0ff4ea945891f45ae8bdbe5d6159,PodSandboxId:edcf6a70f78f56150c7a606ed3ddba1d71be71bebe7a7272b781b0fcc0886c8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726496098865377882,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585,PodSandboxId:3758466f290e44a3a951ed6e050c24645b070ae2a9a92f7a19e85bbfb58b5f3d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726496065292622672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09e26e5b947e1ad4cc55a4d2eef52bc565981127aa468413be01645265181a,PodSandboxId:01046b7ed697e658c7b5337b9968ea7f890f54a7fd71b83d926bdc3acb1bd2e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496065179644187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4,PodSandboxId:923f430b712dcf0744d91386ed8b4b99be5d9d67bc6fa238665c123ca09273f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726496065197877158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb,PodSandboxId:3d1f57513971b5f066cd08d5820a0b5e39de65af5978e46bd85adae9b1d12d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726496065128155803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485-69d699df4f0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2,PodSandboxId:c70dd677d11fa597d6ec78222b711e79639b3938ad7d47bb86f3acbbdea63a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726496060309122005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593,PodSandboxId:f0ec96fa05021b468b9fa53034fe3a6b121b49539029a59cab7770d299c85ae2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726496060317999658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04,PodSandboxId:c470884ef1926ebb19164fb75f7de177126a80beb130e9d7d5534f3eb77a1413,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726496060242162281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427,PodSandboxId:4baa3d76d236be2f97bd8c0ee80559a046e7c5263fada00baad316d4c6011978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726496060234639826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f142b37f6ad4736ca2c31a48410fb0a0763cfb3e5f326abdb05f4d160d17137d,PodSandboxId:ca61839e3c0689bd7a36b88485c1f9710edfab094c7b0f910c9d04fd33f6c78b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726495738198748013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7,PodSandboxId:ddc303bed82d694be4fdb59d47e89bf53a38cda5349e4c6ddf817f0d0bc6f0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726495684302842989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0,PodSandboxId:01cf246881f39153fbcdf2784b9c50c5a67d1281b40e6236d0d1e375223857b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726495684257143168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a,PodSandboxId:020a5cb0db3168c3b25b11970fcb1c1ad28cf13d1b6b50dc2b548f9a98a77a11,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726495672528979913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa,PodSandboxId:23d0c9f0f0ead39e68d0053b8386167c02b312e7a2474465a4d48f17effb9502,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726495672364897137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485
-69d699df4f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0,PodSandboxId:34c92d6e01422106edf0e9a26d493fb92c99d33a667e42c6ea7b220e6de1a1d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726495661966083067,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae,PodSandboxId:6d3d28ba2d9406cb9cb83c3bf908c731c255da93e54452d7502a7044d19e33dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726495661987657499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a,PodSandboxId:f5f8f6ffee17612e8e16f3f376d0b36addedfe5531cd4cd5bd68f8157cfd394c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726495661985766604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690,PodSandboxId:f6a1100542c6686c11f8bcf8c484bec30f0b16f87a0529598cb2074670af80b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726495661919311623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e828fcd6-93be-4fa5-967a-512dd15597f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.480807945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=975c65d7-c85a-4724-b284-77e6c857409a name=/runtime.v1.RuntimeService/Version
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.480880249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=975c65d7-c85a-4724-b284-77e6c857409a name=/runtime.v1.RuntimeService/Version
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.482289267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=595d191d-1904-4796-a396-91056366790c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.482650482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496306482630045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=595d191d-1904-4796-a396-91056366790c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.483494705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0136f058-5144-44f5-ae22-5480d037e177 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.483544819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0136f058-5144-44f5-ae22-5480d037e177 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.483864580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7ecb82798b3b798b425f376ae69fceea56f0ff4ea945891f45ae8bdbe5d6159,PodSandboxId:edcf6a70f78f56150c7a606ed3ddba1d71be71bebe7a7272b781b0fcc0886c8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726496098865377882,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585,PodSandboxId:3758466f290e44a3a951ed6e050c24645b070ae2a9a92f7a19e85bbfb58b5f3d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726496065292622672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09e26e5b947e1ad4cc55a4d2eef52bc565981127aa468413be01645265181a,PodSandboxId:01046b7ed697e658c7b5337b9968ea7f890f54a7fd71b83d926bdc3acb1bd2e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496065179644187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4,PodSandboxId:923f430b712dcf0744d91386ed8b4b99be5d9d67bc6fa238665c123ca09273f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726496065197877158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb,PodSandboxId:3d1f57513971b5f066cd08d5820a0b5e39de65af5978e46bd85adae9b1d12d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726496065128155803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485-69d699df4f0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2,PodSandboxId:c70dd677d11fa597d6ec78222b711e79639b3938ad7d47bb86f3acbbdea63a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726496060309122005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593,PodSandboxId:f0ec96fa05021b468b9fa53034fe3a6b121b49539029a59cab7770d299c85ae2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726496060317999658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04,PodSandboxId:c470884ef1926ebb19164fb75f7de177126a80beb130e9d7d5534f3eb77a1413,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726496060242162281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427,PodSandboxId:4baa3d76d236be2f97bd8c0ee80559a046e7c5263fada00baad316d4c6011978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726496060234639826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f142b37f6ad4736ca2c31a48410fb0a0763cfb3e5f326abdb05f4d160d17137d,PodSandboxId:ca61839e3c0689bd7a36b88485c1f9710edfab094c7b0f910c9d04fd33f6c78b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726495738198748013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7,PodSandboxId:ddc303bed82d694be4fdb59d47e89bf53a38cda5349e4c6ddf817f0d0bc6f0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726495684302842989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0,PodSandboxId:01cf246881f39153fbcdf2784b9c50c5a67d1281b40e6236d0d1e375223857b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726495684257143168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a,PodSandboxId:020a5cb0db3168c3b25b11970fcb1c1ad28cf13d1b6b50dc2b548f9a98a77a11,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726495672528979913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa,PodSandboxId:23d0c9f0f0ead39e68d0053b8386167c02b312e7a2474465a4d48f17effb9502,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726495672364897137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485
-69d699df4f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0,PodSandboxId:34c92d6e01422106edf0e9a26d493fb92c99d33a667e42c6ea7b220e6de1a1d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726495661966083067,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae,PodSandboxId:6d3d28ba2d9406cb9cb83c3bf908c731c255da93e54452d7502a7044d19e33dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726495661987657499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a,PodSandboxId:f5f8f6ffee17612e8e16f3f376d0b36addedfe5531cd4cd5bd68f8157cfd394c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726495661985766604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690,PodSandboxId:f6a1100542c6686c11f8bcf8c484bec30f0b16f87a0529598cb2074670af80b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726495661919311623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0136f058-5144-44f5-ae22-5480d037e177 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.530761774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=983f801c-866f-4d8f-b9a5-073b4ddd3e9e name=/runtime.v1.RuntimeService/Version
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.530834747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=983f801c-866f-4d8f-b9a5-073b4ddd3e9e name=/runtime.v1.RuntimeService/Version
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.531949492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ccaecee-ff48-4333-8ff5-d968c991f1b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.532456488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496306532434005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ccaecee-ff48-4333-8ff5-d968c991f1b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.532962062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df8b970c-69aa-4844-8538-c01bdc8aa460 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.533016157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df8b970c-69aa-4844-8538-c01bdc8aa460 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.533403207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7ecb82798b3b798b425f376ae69fceea56f0ff4ea945891f45ae8bdbe5d6159,PodSandboxId:edcf6a70f78f56150c7a606ed3ddba1d71be71bebe7a7272b781b0fcc0886c8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726496098865377882,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585,PodSandboxId:3758466f290e44a3a951ed6e050c24645b070ae2a9a92f7a19e85bbfb58b5f3d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726496065292622672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09e26e5b947e1ad4cc55a4d2eef52bc565981127aa468413be01645265181a,PodSandboxId:01046b7ed697e658c7b5337b9968ea7f890f54a7fd71b83d926bdc3acb1bd2e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496065179644187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4,PodSandboxId:923f430b712dcf0744d91386ed8b4b99be5d9d67bc6fa238665c123ca09273f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726496065197877158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb,PodSandboxId:3d1f57513971b5f066cd08d5820a0b5e39de65af5978e46bd85adae9b1d12d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726496065128155803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485-69d699df4f0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2,PodSandboxId:c70dd677d11fa597d6ec78222b711e79639b3938ad7d47bb86f3acbbdea63a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726496060309122005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593,PodSandboxId:f0ec96fa05021b468b9fa53034fe3a6b121b49539029a59cab7770d299c85ae2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726496060317999658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04,PodSandboxId:c470884ef1926ebb19164fb75f7de177126a80beb130e9d7d5534f3eb77a1413,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726496060242162281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427,PodSandboxId:4baa3d76d236be2f97bd8c0ee80559a046e7c5263fada00baad316d4c6011978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726496060234639826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f142b37f6ad4736ca2c31a48410fb0a0763cfb3e5f326abdb05f4d160d17137d,PodSandboxId:ca61839e3c0689bd7a36b88485c1f9710edfab094c7b0f910c9d04fd33f6c78b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726495738198748013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7,PodSandboxId:ddc303bed82d694be4fdb59d47e89bf53a38cda5349e4c6ddf817f0d0bc6f0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726495684302842989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0,PodSandboxId:01cf246881f39153fbcdf2784b9c50c5a67d1281b40e6236d0d1e375223857b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726495684257143168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a,PodSandboxId:020a5cb0db3168c3b25b11970fcb1c1ad28cf13d1b6b50dc2b548f9a98a77a11,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726495672528979913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa,PodSandboxId:23d0c9f0f0ead39e68d0053b8386167c02b312e7a2474465a4d48f17effb9502,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726495672364897137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485
-69d699df4f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0,PodSandboxId:34c92d6e01422106edf0e9a26d493fb92c99d33a667e42c6ea7b220e6de1a1d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726495661966083067,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae,PodSandboxId:6d3d28ba2d9406cb9cb83c3bf908c731c255da93e54452d7502a7044d19e33dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726495661987657499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a,PodSandboxId:f5f8f6ffee17612e8e16f3f376d0b36addedfe5531cd4cd5bd68f8157cfd394c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726495661985766604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690,PodSandboxId:f6a1100542c6686c11f8bcf8c484bec30f0b16f87a0529598cb2074670af80b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726495661919311623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df8b970c-69aa-4844-8538-c01bdc8aa460 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.576178131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73ab0fd9-1850-4ed6-a57c-5a28f2387099 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.576320657Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73ab0fd9-1850-4ed6-a57c-5a28f2387099 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.577712163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fc29a0b-e245-4516-8065-73571faa6456 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.578117124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496306578087052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fc29a0b-e245-4516-8065-73571faa6456 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.578613872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87131561-3d27-4f53-b38e-15c51f6c9688 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.578687414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87131561-3d27-4f53-b38e-15c51f6c9688 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:18:26 multinode-561755 crio[2716]: time="2024-09-16 14:18:26.579019513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7ecb82798b3b798b425f376ae69fceea56f0ff4ea945891f45ae8bdbe5d6159,PodSandboxId:edcf6a70f78f56150c7a606ed3ddba1d71be71bebe7a7272b781b0fcc0886c8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726496098865377882,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585,PodSandboxId:3758466f290e44a3a951ed6e050c24645b070ae2a9a92f7a19e85bbfb58b5f3d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726496065292622672,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09e26e5b947e1ad4cc55a4d2eef52bc565981127aa468413be01645265181a,PodSandboxId:01046b7ed697e658c7b5337b9968ea7f890f54a7fd71b83d926bdc3acb1bd2e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496065179644187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4,PodSandboxId:923f430b712dcf0744d91386ed8b4b99be5d9d67bc6fa238665c123ca09273f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726496065197877158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb,PodSandboxId:3d1f57513971b5f066cd08d5820a0b5e39de65af5978e46bd85adae9b1d12d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726496065128155803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485-69d699df4f0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2,PodSandboxId:c70dd677d11fa597d6ec78222b711e79639b3938ad7d47bb86f3acbbdea63a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726496060309122005,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593,PodSandboxId:f0ec96fa05021b468b9fa53034fe3a6b121b49539029a59cab7770d299c85ae2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726496060317999658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04,PodSandboxId:c470884ef1926ebb19164fb75f7de177126a80beb130e9d7d5534f3eb77a1413,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726496060242162281,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427,PodSandboxId:4baa3d76d236be2f97bd8c0ee80559a046e7c5263fada00baad316d4c6011978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726496060234639826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f142b37f6ad4736ca2c31a48410fb0a0763cfb3e5f326abdb05f4d160d17137d,PodSandboxId:ca61839e3c0689bd7a36b88485c1f9710edfab094c7b0f910c9d04fd33f6c78b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726495738198748013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-f9c5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45b527f4-85bd-412f-ae54-bcce15672385,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7,PodSandboxId:ddc303bed82d694be4fdb59d47e89bf53a38cda5349e4c6ddf817f0d0bc6f0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726495684302842989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgmxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f4aec2-b2c0-4d56-8d4b-03aefe3855d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:481d5f837d21d98e89bdf18bf25bb6d2f3c38cf20ef42ae7c231defcb5ab24e0,PodSandboxId:01cf246881f39153fbcdf2784b9c50c5a67d1281b40e6236d0d1e375223857b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726495684257143168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 131fe3a3-c839-45e9-af8b-eb2775d07571,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a,PodSandboxId:020a5cb0db3168c3b25b11970fcb1c1ad28cf13d1b6b50dc2b548f9a98a77a11,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726495672528979913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t6sh4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 97f87f14-777b-4513-95d8-c8f12b26a6db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa,PodSandboxId:23d0c9f0f0ead39e68d0053b8386167c02b312e7a2474465a4d48f17effb9502,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726495672364897137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz92k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef55133b-6cf4-4131-b485
-69d699df4f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0,PodSandboxId:34c92d6e01422106edf0e9a26d493fb92c99d33a667e42c6ea7b220e6de1a1d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726495661966083067,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3cfc34b5d3923d5c251e8166c360e2,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae,PodSandboxId:6d3d28ba2d9406cb9cb83c3bf908c731c255da93e54452d7502a7044d19e33dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726495661987657499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed229b51306c92dd9accb501990f07f,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a,PodSandboxId:f5f8f6ffee17612e8e16f3f376d0b36addedfe5531cd4cd5bd68f8157cfd394c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726495661985766604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756c6d67643bb2ee70f40f961c02d740,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690,PodSandboxId:f6a1100542c6686c11f8bcf8c484bec30f0b16f87a0529598cb2074670af80b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726495661919311623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-561755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b5a13669d3298607e02d3516913a0e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87131561-3d27-4f53-b38e-15c51f6c9688 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7ecb82798b3b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   edcf6a70f78f5       busybox-7dff88458-f9c5w
	58508acb74855       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   3758466f290e4       kindnet-t6sh4
	adf4d60b2f201       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   923f430b712dc       coredns-7c65d6cfc9-qgmxs
	7b09e26e5b947       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   01046b7ed697e       storage-provisioner
	6732202a9735a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   3d1f57513971b       kube-proxy-fz92k
	3d2341c5103f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   f0ec96fa05021       etcd-multinode-561755
	32c48dc4407b5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   c70dd677d11fa       kube-scheduler-multinode-561755
	b454d7bb25571       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   c470884ef1926       kube-apiserver-multinode-561755
	c82f6eb6f5f32       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   4baa3d76d236b       kube-controller-manager-multinode-561755
	f142b37f6ad47       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   ca61839e3c068       busybox-7dff88458-f9c5w
	038d0db591c9e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   ddc303bed82d6       coredns-7c65d6cfc9-qgmxs
	481d5f837d21d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   01cf246881f39       storage-provisioner
	ad6237280bcbc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   020a5cb0db316       kindnet-t6sh4
	9bbf062b56098       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   23d0c9f0f0ead       kube-proxy-fz92k
	ffe27a6ccf80f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   6d3d28ba2d940       kube-scheduler-multinode-561755
	70cdfc29b2970       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   f5f8f6ffee176       kube-apiserver-multinode-561755
	3e77a439b0e91       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   34c92d6e01422       etcd-multinode-561755
	b4d468e417dd8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   f6a1100542c66       kube-controller-manager-multinode-561755
	
	
	==> coredns [038d0db591c9e5abc920c53e11e368e03ce9f5f56c252fe66d6adca7aecc76c7] <==
	[INFO] 10.244.1.2:38653 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001613273s
	[INFO] 10.244.1.2:52874 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146113s
	[INFO] 10.244.1.2:32874 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077559s
	[INFO] 10.244.1.2:57140 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001054421s
	[INFO] 10.244.1.2:34864 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072288s
	[INFO] 10.244.1.2:32985 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006605s
	[INFO] 10.244.1.2:54940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062877s
	[INFO] 10.244.0.3:38082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137423s
	[INFO] 10.244.0.3:40392 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000053684s
	[INFO] 10.244.0.3:39986 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105196s
	[INFO] 10.244.0.3:43189 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036357s
	[INFO] 10.244.1.2:32802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014118s
	[INFO] 10.244.1.2:46476 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121807s
	[INFO] 10.244.1.2:46921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097849s
	[INFO] 10.244.1.2:46714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121275s
	[INFO] 10.244.0.3:57079 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132786s
	[INFO] 10.244.0.3:49020 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170154s
	[INFO] 10.244.0.3:60501 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125852s
	[INFO] 10.244.0.3:48526 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120325s
	[INFO] 10.244.1.2:33299 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146427s
	[INFO] 10.244.1.2:43843 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110016s
	[INFO] 10.244.1.2:49995 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104189s
	[INFO] 10.244.1.2:54004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092229s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [adf4d60b2f20123ef96739e2427e8fdc83ffcbc541e89f974a0991aa5ef71cc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37252 - 21208 "HINFO IN 2766008737970293421.9180525390571247957. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010463352s
	
	
	==> describe nodes <==
	Name:               multinode-561755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-561755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=multinode-561755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T14_07_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 14:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-561755
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 14:18:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 14:14:23 +0000   Mon, 16 Sep 2024 14:08:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    multinode-561755
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 abfdd7c763814fb7a99004bb6a18a7f4
	  System UUID:                abfdd7c7-6381-4fb7-a990-04bb6a18a7f4
	  Boot ID:                    d00a5a85-3106-449c-943b-e325316e5e8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-f9c5w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  kube-system                 coredns-7c65d6cfc9-qgmxs                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-561755                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-t6sh4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-561755             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-561755    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fz92k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-561755             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-561755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-561755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-561755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-561755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-561755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-561755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-561755 event: Registered Node multinode-561755 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-561755 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-561755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-561755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-561755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node multinode-561755 event: Registered Node multinode-561755 in Controller
	
	
	Name:               multinode-561755-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-561755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=multinode-561755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T14_15_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 14:15:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-561755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 14:15:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:16:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:16:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:16:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 14:15:36 +0000   Mon, 16 Sep 2024 14:16:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    multinode-561755-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3d12e8cd78542e197df8ad303b2b9a0
	  System UUID:                a3d12e8c-d785-42e1-97df-8ad303b2b9a0
	  Boot ID:                    6181806d-5667-41f5-9bf7-9bb25344fc91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cwk54    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kindnet-8qqj5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m51s
	  kube-system                 kube-proxy-dgsnj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m52s (x2 over 9m52s)  kubelet          Node multinode-561755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s (x2 over 9m52s)  kubelet          Node multinode-561755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m52s (x2 over 9m52s)  kubelet          Node multinode-561755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m33s                  kubelet          Node multinode-561755-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet          Node multinode-561755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet          Node multinode-561755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet          Node multinode-561755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m3s                   kubelet          Node multinode-561755-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-561755-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056960] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063741] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.173445] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.128851] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.285816] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.801844] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +4.283668] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.054383] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990944] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.075184] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.611490] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.858125] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 14:08] kauditd_printk_skb: 41 callbacks suppressed
	[ +52.002975] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 14:14] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.137606] systemd-fstab-generator[2651]: Ignoring "noauto" option for root device
	[  +0.162607] systemd-fstab-generator[2665]: Ignoring "noauto" option for root device
	[  +0.137343] systemd-fstab-generator[2677]: Ignoring "noauto" option for root device
	[  +0.260613] systemd-fstab-generator[2705]: Ignoring "noauto" option for root device
	[  +0.638377] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +1.803436] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +5.705069] kauditd_printk_skb: 184 callbacks suppressed
	[  +9.632179] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.493793] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[ +17.636895] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3d2341c5103f3a7f29647d1871e30f4f764af2fea16ad0d761abd5df235ac593] <==
	{"level":"info","ts":"2024-09-16T14:14:20.880858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 switched to configuration voters=(12153077199096499956)"}
	{"level":"info","ts":"2024-09-16T14:14:20.887453Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e373eafcd5903e51","local-member-id":"a8a86752a40bcef4","added-peer-id":"a8a86752a40bcef4","added-peer-peer-urls":["https://192.168.39.163:2380"]}
	{"level":"info","ts":"2024-09-16T14:14:20.887617Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e373eafcd5903e51","local-member-id":"a8a86752a40bcef4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:14:20.887668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:14:20.889024Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T14:14:20.891515Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a8a86752a40bcef4","initial-advertise-peer-urls":["https://192.168.39.163:2380"],"listen-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.163:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T14:14:20.893261Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T14:14:20.893462Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:14:20.893492Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:14:22.294157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T14:14:22.294260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T14:14:22.294303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 received MsgPreVoteResp from a8a86752a40bcef4 at term 2"}
	{"level":"info","ts":"2024-09-16T14:14:22.294318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.294323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 received MsgVoteResp from a8a86752a40bcef4 at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.294331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.294339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a8a86752a40bcef4 elected leader a8a86752a40bcef4 at term 3"}
	{"level":"info","ts":"2024-09-16T14:14:22.298920Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:14:22.299317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:14:22.298911Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a8a86752a40bcef4","local-member-attributes":"{Name:multinode-561755 ClientURLs:[https://192.168.39.163:2379]}","request-path":"/0/members/a8a86752a40bcef4/attributes","cluster-id":"e373eafcd5903e51","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T14:14:22.299824Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T14:14:22.299920Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T14:14:22.300082Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:14:22.300949Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:14:22.301928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.163:2379"}
	{"level":"info","ts":"2024-09-16T14:14:22.300970Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [3e77a439b0e91f9361c7cea812056c652d036a67b2c7e9cf555850d1b1cc43c0] <==
	{"level":"info","ts":"2024-09-16T14:08:34.886634Z","caller":"traceutil/trace.go:171","msg":"trace[1736031726] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"241.928727ms","start":"2024-09-16T14:08:34.644687Z","end":"2024-09-16T14:08:34.886616Z","steps":["trace[1736031726] 'process raft request'  (duration: 236.954146ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T14:09:34.526798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.642609ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14912704774043398584 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-561755-m03.17f5bec2657cec39\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-561755-m03.17f5bec2657cec39\" value_size:642 lease:5689332737188622451 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T14:09:34.526927Z","caller":"traceutil/trace.go:171","msg":"trace[634631698] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"152.460443ms","start":"2024-09-16T14:09:34.374451Z","end":"2024-09-16T14:09:34.526912Z","steps":["trace[634631698] 'read index received'  (duration: 28.484µs)","trace[634631698] 'applied index is now lower than readState.Index'  (duration: 152.431265ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T14:09:34.527033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.569603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-561755-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T14:09:34.527073Z","caller":"traceutil/trace.go:171","msg":"trace[2116339976] range","detail":"{range_begin:/registry/minions/multinode-561755-m03; range_end:; response_count:0; response_revision:610; }","duration":"152.621541ms","start":"2024-09-16T14:09:34.374446Z","end":"2024-09-16T14:09:34.527068Z","steps":["trace[2116339976] 'agreement among raft nodes before linearized reading'  (duration: 152.512122ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:09:34.527107Z","caller":"traceutil/trace.go:171","msg":"trace[1146783470] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"204.229038ms","start":"2024-09-16T14:09:34.322800Z","end":"2024-09-16T14:09:34.527029Z","steps":["trace[1146783470] 'compare'  (duration: 199.513462ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T14:09:35.478034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.715539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-561755-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T14:09:35.478104Z","caller":"traceutil/trace.go:171","msg":"trace[317187513] range","detail":"{range_begin:/registry/csinodes/multinode-561755-m03; range_end:; response_count:0; response_revision:631; }","duration":"168.792037ms","start":"2024-09-16T14:09:35.309298Z","end":"2024-09-16T14:09:35.478090Z","steps":["trace[317187513] 'range keys from in-memory index tree'  (duration: 168.580181ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:09:35.660363Z","caller":"traceutil/trace.go:171","msg":"trace[1249465066] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"128.147058ms","start":"2024-09-16T14:09:35.532203Z","end":"2024-09-16T14:09:35.660350Z","steps":["trace[1249465066] 'read index received'  (duration: 127.949608ms)","trace[1249465066] 'applied index is now lower than readState.Index'  (duration: 196.984µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T14:09:35.660521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.317842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-561755-m03\" ","response":"range_response_count:1 size:2824"}
	{"level":"info","ts":"2024-09-16T14:09:35.660551Z","caller":"traceutil/trace.go:171","msg":"trace[1442177822] range","detail":"{range_begin:/registry/minions/multinode-561755-m03; range_end:; response_count:1; response_revision:632; }","duration":"128.360965ms","start":"2024-09-16T14:09:35.532183Z","end":"2024-09-16T14:09:35.660544Z","steps":["trace[1442177822] 'agreement among raft nodes before linearized reading'  (duration: 128.233904ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:09:35.660752Z","caller":"traceutil/trace.go:171","msg":"trace[1289672176] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"177.587061ms","start":"2024-09-16T14:09:35.483157Z","end":"2024-09-16T14:09:35.660744Z","steps":["trace[1289672176] 'process raft request'  (duration: 177.032795ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T14:09:35.945139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.140294ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T14:09:35.945323Z","caller":"traceutil/trace.go:171","msg":"trace[802709646] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:632; }","duration":"243.331255ms","start":"2024-09-16T14:09:35.701978Z","end":"2024-09-16T14:09:35.945310Z","steps":["trace[802709646] 'range keys from in-memory index tree'  (duration: 243.129774ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T14:12:45.060139Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T14:12:45.066174Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-561755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"]}
	{"level":"warn","ts":"2024-09-16T14:12:45.070307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T14:12:45.070437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/09/16 14:12:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T14:12:45.147142Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T14:12:45.147217Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T14:12:45.147361Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a8a86752a40bcef4","current-leader-member-id":"a8a86752a40bcef4"}
	{"level":"info","ts":"2024-09-16T14:12:45.149951Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:12:45.150101Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-09-16T14:12:45.150137Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-561755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"]}
	
	
	==> kernel <==
	 14:18:27 up 11 min,  0 users,  load average: 0.15, 0.17, 0.12
	Linux multinode-561755 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [58508acb748554e9375a653bdc562145ea2c3e24417df72dba1116bd07a16585] <==
	I0916 14:17:26.313382       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:17:36.319963       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:17:36.320028       1 main.go:299] handling current node
	I0916 14:17:36.320045       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:17:36.320050       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:17:46.318023       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:17:46.318202       1 main.go:299] handling current node
	I0916 14:17:46.318277       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:17:46.318302       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:17:56.312750       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:17:56.312871       1 main.go:299] handling current node
	I0916 14:17:56.312896       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:17:56.312903       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:18:06.316584       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:18:06.316639       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:18:06.316772       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:18:06.316797       1 main.go:299] handling current node
	I0916 14:18:16.320349       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:18:16.320430       1 main.go:299] handling current node
	I0916 14:18:16.320454       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:18:16.320459       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:18:26.313481       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:18:26.313562       1 main.go:299] handling current node
	I0916 14:18:26.313599       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:18:26.313608       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [ad6237280bcbc8d08d158841602d786f89ad8b2507cbf2211ac22fbfedfd244a] <==
	I0916 14:12:03.525625       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:13.526363       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:13.526493       1 main.go:299] handling current node
	I0916 14:12:13.526529       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:13.526549       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:12:13.526690       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:13.526716       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:23.525378       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:23.525478       1 main.go:299] handling current node
	I0916 14:12:23.525525       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:23.525535       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:12:23.525718       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:23.525750       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:33.516661       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:33.516713       1 main.go:299] handling current node
	I0916 14:12:33.516727       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:33.516734       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	I0916 14:12:33.516880       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:33.516904       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:43.516678       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0916 14:12:43.516773       1 main.go:322] Node multinode-561755-m03 has CIDR [10.244.3.0/24] 
	I0916 14:12:43.516915       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0916 14:12:43.516949       1 main.go:299] handling current node
	I0916 14:12:43.517014       1 main.go:295] Handling node with IPs: map[192.168.39.34:{}]
	I0916 14:12:43.517037       1 main.go:322] Node multinode-561755-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [70cdfc29b297091f9e9077b3d8748dc5e8b5154ad036d17d7e2e57fb6a90053a] <==
	E0916 14:09:01.158030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.163:8443->192.168.39.1:59556: use of closed network connection
	E0916 14:09:01.323588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.163:8443->192.168.39.1:59568: use of closed network connection
	I0916 14:12:45.060474       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0916 14:12:45.080591       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0916 14:12:45.080849       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 14:12:45.081057       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0916 14:12:45.081166       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0916 14:12:45.082666       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0916 14:12:45.082791       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0916 14:12:45.082823       1 establishing_controller.go:92] Shutting down EstablishingController
	I0916 14:12:45.082839       1 naming_controller.go:305] Shutting down NamingConditionController
	I0916 14:12:45.082854       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0916 14:12:45.082868       1 controller.go:170] Shutting down OpenAPI controller
	I0916 14:12:45.082879       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0916 14:12:45.082894       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0916 14:12:45.082913       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0916 14:12:45.082929       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0916 14:12:45.082936       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0916 14:12:45.082954       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0916 14:12:45.082963       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0916 14:12:45.082975       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0916 14:12:45.082982       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0916 14:12:45.082990       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0916 14:12:45.083001       1 controller.go:132] Ending legacy_token_tracking_controller
	I0916 14:12:45.083006       1 controller.go:133] Shutting down legacy_token_tracking_controller
	
	
	==> kube-apiserver [b454d7bb255716c709ab1373da81c6a5f05e50514d5f91d32c0590f8413eba04] <==
	I0916 14:14:23.610434       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 14:14:23.612772       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 14:14:23.613512       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 14:14:23.619541       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 14:14:23.633340       1 aggregator.go:171] initial CRD sync complete...
	I0916 14:14:23.633370       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 14:14:23.633381       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 14:14:23.633386       1 cache.go:39] Caches are synced for autoregister controller
	I0916 14:14:23.637189       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 14:14:23.637316       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 14:14:23.637448       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 14:14:23.637588       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 14:14:23.643758       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 14:14:23.643819       1 policy_source.go:224] refreshing policies
	I0916 14:14:23.647653       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 14:14:23.648914       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 14:14:23.683910       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 14:14:24.486091       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 14:14:25.999977       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 14:14:26.103716       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 14:14:26.117481       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 14:14:26.181401       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 14:14:26.186977       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 14:14:26.982338       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 14:14:27.374563       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b4d468e417dd8afa14df3147175ab51461c530334533dceb411e6decb152c690] <==
	I0916 14:10:21.712832       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:10:21.715426       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-561755-m03\" does not exist"
	I0916 14:10:21.731680       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-561755-m03" podCIDRs=["10.244.3.0/24"]
	I0916 14:10:21.731730       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	E0916 14:10:21.753670       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-561755-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-561755-m03" podCIDRs=["10.244.4.0/24"]
	E0916 14:10:21.753734       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-561755-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-561755-m03"
	E0916 14:10:21.753836       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-561755-m03': failed to patch node CIDR: Node \"multinode-561755-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 14:10:21.753876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:21.759129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:22.086195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:26.133784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:32.140060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:39.484922       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:10:39.485290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:39.497923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:10:41.117862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:11:21.133424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:11:21.134511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m03"
	I0916 14:11:21.149901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:11:21.179902       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.458482ms"
	I0916 14:11:21.180473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.523µs"
	I0916 14:11:26.181857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:11:26.196407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:11:26.227101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:11:36.308273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	
	
	==> kube-controller-manager [c82f6eb6f5f32910d140da76d9260949ee3401895d9edebe51c819564f920427] <==
	I0916 14:15:42.189648       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-561755-m03" podCIDRs=["10.244.2.0/24"]
	I0916 14:15:42.189714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.189936       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.196173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.299561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:42.661694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:47.220678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:15:52.351181       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:00.092957       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:16:00.093285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:00.106886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:02.142127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:04.710145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:04.724625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:05.277918       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-561755-m02"
	I0916 14:16:05.278019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m03"
	I0916 14:16:42.160827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:16:42.175169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:16:42.178177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.227259ms"
	I0916 14:16:42.179056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.99µs"
	I0916 14:16:47.314409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-561755-m02"
	I0916 14:17:06.963723       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kd8nx"
	I0916 14:17:06.984795       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kd8nx"
	I0916 14:17:06.984847       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mc7zk"
	I0916 14:17:07.037083       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mc7zk"
	
	
	==> kube-proxy [6732202a9735ad240ad594daeba3c99acbd6041fb5330c5414718e5a2531b5eb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 14:14:25.598401       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 14:14:25.619833       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.163"]
	E0916 14:14:25.619925       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 14:14:25.741170       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 14:14:25.741537       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 14:14:25.741942       1 server_linux.go:169] "Using iptables Proxier"
	I0916 14:14:25.752459       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 14:14:25.753101       1 server.go:483] "Version info" version="v1.31.1"
	I0916 14:14:25.753131       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:14:25.754785       1 config.go:199] "Starting service config controller"
	I0916 14:14:25.754862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 14:14:25.754917       1 config.go:105] "Starting endpoint slice config controller"
	I0916 14:14:25.754922       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 14:14:25.755859       1 config.go:328] "Starting node config controller"
	I0916 14:14:25.755893       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 14:14:25.856911       1 shared_informer.go:320] Caches are synced for service config
	I0916 14:14:25.857019       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 14:14:25.858366       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9bbf062b56098221043af49349f3515a3514781797b5351608741e161512e0aa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 14:07:52.580898       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 14:07:52.593571       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.163"]
	E0916 14:07:52.593690       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 14:07:52.742051       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 14:07:52.744316       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 14:07:52.744458       1 server_linux.go:169] "Using iptables Proxier"
	I0916 14:07:52.747007       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 14:07:52.747530       1 server.go:483] "Version info" version="v1.31.1"
	I0916 14:07:52.747592       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:07:52.750947       1 config.go:199] "Starting service config controller"
	I0916 14:07:52.751053       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 14:07:52.751529       1 config.go:105] "Starting endpoint slice config controller"
	I0916 14:07:52.751560       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 14:07:52.752751       1 config.go:328] "Starting node config controller"
	I0916 14:07:52.752784       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 14:07:52.852408       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 14:07:52.852469       1 shared_informer.go:320] Caches are synced for service config
	I0916 14:07:52.853015       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [32c48dc4407b5424dafcfc720fbc1d0b916236aadc82242cdc895ec6156be7f2] <==
	I0916 14:14:21.392964       1 serving.go:386] Generated self-signed cert in-memory
	I0916 14:14:23.678798       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 14:14:23.678845       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:14:23.686314       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0916 14:14:23.686375       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0916 14:14:23.686483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 14:14:23.686510       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 14:14:23.686523       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0916 14:14:23.686531       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 14:14:23.687102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 14:14:23.688583       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 14:14:23.786987       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 14:14:23.787065       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0916 14:14:23.787081       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ffe27a6ccf80fc83aa095c1981ef41d89878447fbeff8ce50858c52630c320ae] <==
	E0916 14:07:44.455825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.455889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 14:07:44.455940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.455897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:44.456063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.456996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 14:07:44.457040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:44.460296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 14:07:44.460332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.408547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 14:07:45.408598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.420776       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 14:07:45.420824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.454597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:45.454641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.479006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:45.479048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.504890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 14:07:45.504947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.565951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 14:07:45.566072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 14:07:45.691526       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 14:07:45.691700       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 14:07:48.838202       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 14:12:45.059361       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 14:17:09 multinode-561755 kubelet[2933]: E0916 14:17:09.676076    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496229675584687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:19 multinode-561755 kubelet[2933]: E0916 14:17:19.631533    2933 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 14:17:19 multinode-561755 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 14:17:19 multinode-561755 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 14:17:19 multinode-561755 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 14:17:19 multinode-561755 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 14:17:19 multinode-561755 kubelet[2933]: E0916 14:17:19.677282    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496239677041732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:19 multinode-561755 kubelet[2933]: E0916 14:17:19.677329    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496239677041732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:29 multinode-561755 kubelet[2933]: E0916 14:17:29.678983    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496249678741343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:29 multinode-561755 kubelet[2933]: E0916 14:17:29.679029    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496249678741343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:39 multinode-561755 kubelet[2933]: E0916 14:17:39.680743    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496259680390676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:39 multinode-561755 kubelet[2933]: E0916 14:17:39.681091    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496259680390676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:49 multinode-561755 kubelet[2933]: E0916 14:17:49.683527    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496269683175702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:49 multinode-561755 kubelet[2933]: E0916 14:17:49.683573    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496269683175702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:59 multinode-561755 kubelet[2933]: E0916 14:17:59.685394    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496279685123465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:17:59 multinode-561755 kubelet[2933]: E0916 14:17:59.685432    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496279685123465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:18:09 multinode-561755 kubelet[2933]: E0916 14:18:09.686779    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496289686093335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:18:09 multinode-561755 kubelet[2933]: E0916 14:18:09.686813    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496289686093335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:18:19 multinode-561755 kubelet[2933]: E0916 14:18:19.625973    2933 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 14:18:19 multinode-561755 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 14:18:19 multinode-561755 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 14:18:19 multinode-561755 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 14:18:19 multinode-561755 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 14:18:19 multinode-561755 kubelet[2933]: E0916 14:18:19.688610    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496299688367891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 14:18:19 multinode-561755 kubelet[2933]: E0916 14:18:19.688630    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496299688367891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 14:18:26.164604  755705 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19652-713072/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
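The `bufio.Scanner: token too long` error in the stderr block above is Go's standard `bufio.ErrTooLong`: `bufio.Scanner` caps tokens at `bufio.MaxScanTokenSize` (64 KiB) by default, so a single very long line in `lastStart.txt` aborts the scan. A minimal, self-contained sketch of that failure mode and the usual workaround of enlarging the scanner buffer (illustrative only, not minikube's actual `logs.go` code):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single "line" longer than bufio.MaxScanTokenSize (64 KiB).
	longLine := strings.Repeat("x", 2*bufio.MaxScanTokenSize)

	// Default scanner: Scan() stops and Err() reports "bufio.Scanner: token too long".
	s := bufio.NewScanner(strings.NewReader(longLine))
	for s.Scan() {
	}
	fmt.Println("default scanner error:", s.Err()) // bufio.ErrTooLong

	// Workaround: give the scanner a larger buffer before scanning.
	s = bufio.NewScanner(strings.NewReader(longLine))
	s.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow lines up to 1 MiB
	for s.Scan() {
	}
	fmt.Println("large-buffer scanner error:", s.Err()) // <nil>
}
```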
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-561755 -n multinode-561755
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-561755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.35s)


                                                
                                    
TestPreload (166.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-848370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-848370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.637202181s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-848370 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-848370 image pull gcr.io/k8s-minikube/busybox: (2.172592121s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-848370
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-848370: (7.280254327s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-848370 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-848370 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.998700725s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-848370 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
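The failure reduces to a substring check over the `image list` output above: `gcr.io/k8s-minikube/busybox` was pulled before the stop (preload_test.go:52) but does not appear in the list after the preload-backed restart. A minimal sketch of that kind of check, using a hypothetical `containsImage` helper rather than the literal `preload_test.go` assertion:

```go
package main

import (
	"fmt"
	"strings"
)

// containsImage reports whether any line of `minikube image list`
// output mentions the wanted image reference.
func containsImage(imageListOutput, wanted string) bool {
	for _, line := range strings.Split(imageListOutput, "\n") {
		if strings.Contains(strings.TrimSpace(line), wanted) {
			return true
		}
	}
	return false
}

func main() {
	// Stand-in for the captured `image list` output shown above.
	output := "registry.k8s.io/pause:3.7\nregistry.k8s.io/etcd:3.5.3-0\n"
	if !containsImage(output, "gcr.io/k8s-minikube/busybox") {
		fmt.Println("Expected to find gcr.io/k8s-minikube/busybox in image list output")
	}
}
```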
panic.go:629: *** TestPreload FAILED at 2024-09-16 14:25:26.884903838 +0000 UTC m=+5591.241241277
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-848370 -n test-preload-848370
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-848370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-848370 logs -n 25: (1.029838079s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755 sudo cat                                       | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m03_multinode-561755.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt                       | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m02:/home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n                                                                 | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | multinode-561755-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-561755 ssh -n multinode-561755-m02 sudo cat                                   | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | /home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-561755 node stop m03                                                          | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	| node    | multinode-561755 node start                                                             | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC | 16 Sep 24 14:10 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-561755                                                                | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC |                     |
	| stop    | -p multinode-561755                                                                     | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:10 UTC |                     |
	| start   | -p multinode-561755                                                                     | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:12 UTC | 16 Sep 24 14:16 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-561755                                                                | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:16 UTC |                     |
	| node    | multinode-561755 node delete                                                            | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:16 UTC | 16 Sep 24 14:16 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-561755 stop                                                                   | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:16 UTC |                     |
	| start   | -p multinode-561755                                                                     | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:18 UTC | 16 Sep 24 14:21 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-561755                                                                | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:21 UTC |                     |
	| start   | -p multinode-561755-m02                                                                 | multinode-561755-m02 | jenkins | v1.34.0 | 16 Sep 24 14:21 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-561755-m03                                                                 | multinode-561755-m03 | jenkins | v1.34.0 | 16 Sep 24 14:21 UTC | 16 Sep 24 14:22 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-561755                                                                 | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:22 UTC |                     |
	| delete  | -p multinode-561755-m03                                                                 | multinode-561755-m03 | jenkins | v1.34.0 | 16 Sep 24 14:22 UTC | 16 Sep 24 14:22 UTC |
	| delete  | -p multinode-561755                                                                     | multinode-561755     | jenkins | v1.34.0 | 16 Sep 24 14:22 UTC | 16 Sep 24 14:22 UTC |
	| start   | -p test-preload-848370                                                                  | test-preload-848370  | jenkins | v1.34.0 | 16 Sep 24 14:22 UTC | 16 Sep 24 14:24 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-848370 image pull                                                          | test-preload-848370  | jenkins | v1.34.0 | 16 Sep 24 14:24 UTC | 16 Sep 24 14:24 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-848370                                                                  | test-preload-848370  | jenkins | v1.34.0 | 16 Sep 24 14:24 UTC | 16 Sep 24 14:24 UTC |
	| start   | -p test-preload-848370                                                                  | test-preload-848370  | jenkins | v1.34.0 | 16 Sep 24 14:24 UTC | 16 Sep 24 14:25 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-848370 image list                                                          | test-preload-848370  | jenkins | v1.34.0 | 16 Sep 24 14:25 UTC | 16 Sep 24 14:25 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 14:24:23
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 14:24:23.717991  758184 out.go:345] Setting OutFile to fd 1 ...
	I0916 14:24:23.718117  758184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:24:23.718126  758184 out.go:358] Setting ErrFile to fd 2...
	I0916 14:24:23.718131  758184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:24:23.718302  758184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 14:24:23.718823  758184 out.go:352] Setting JSON to false
	I0916 14:24:23.719779  758184 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14813,"bootTime":1726481851,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 14:24:23.719865  758184 start.go:139] virtualization: kvm guest
	I0916 14:24:23.721919  758184 out.go:177] * [test-preload-848370] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 14:24:23.723377  758184 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 14:24:23.723384  758184 notify.go:220] Checking for updates...
	I0916 14:24:23.725889  758184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 14:24:23.726941  758184 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 14:24:23.728026  758184 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 14:24:23.729207  758184 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 14:24:23.730475  758184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 14:24:23.731949  758184 config.go:182] Loaded profile config "test-preload-848370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0916 14:24:23.732410  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:24:23.732445  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:24:23.746744  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0916 14:24:23.747196  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:24:23.747727  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:24:23.747747  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:24:23.748052  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:24:23.748231  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:23.749759  758184 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 14:24:23.750862  758184 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 14:24:23.751151  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:24:23.751194  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:24:23.764774  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0916 14:24:23.765143  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:24:23.765563  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:24:23.765588  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:24:23.765925  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:24:23.766121  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:23.799315  758184 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 14:24:23.800531  758184 start.go:297] selected driver: kvm2
	I0916 14:24:23.800545  758184 start.go:901] validating driver "kvm2" against &{Name:test-preload-848370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-848370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:24:23.800671  758184 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 14:24:23.801412  758184 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:24:23.801487  758184 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 14:24:23.815888  758184 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 14:24:23.816211  758184 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 14:24:23.816243  758184 cni.go:84] Creating CNI manager for ""
	I0916 14:24:23.816289  758184 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:24:23.816349  758184 start.go:340] cluster config:
	{Name:test-preload-848370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-848370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:24:23.816456  758184 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:24:23.818397  758184 out.go:177] * Starting "test-preload-848370" primary control-plane node in "test-preload-848370" cluster
	I0916 14:24:23.819470  758184 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0916 14:24:23.847629  758184 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0916 14:24:23.847652  758184 cache.go:56] Caching tarball of preloaded images
	I0916 14:24:23.847796  758184 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0916 14:24:23.849226  758184 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0916 14:24:23.850357  758184 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0916 14:24:23.877707  758184 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0916 14:24:28.004803  758184 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0916 14:24:28.004926  758184 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0916 14:24:28.869989  758184 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0916 14:24:28.870142  758184 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/config.json ...
	I0916 14:24:28.870387  758184 start.go:360] acquireMachinesLock for test-preload-848370: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 14:24:28.870463  758184 start.go:364] duration metric: took 50.476µs to acquireMachinesLock for "test-preload-848370"
	I0916 14:24:28.870486  758184 start.go:96] Skipping create...Using existing machine configuration
	I0916 14:24:28.870494  758184 fix.go:54] fixHost starting: 
	I0916 14:24:28.870785  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:24:28.870850  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:24:28.885745  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0916 14:24:28.886245  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:24:28.886774  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:24:28.886799  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:24:28.887100  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:24:28.887274  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:28.887429  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetState
	I0916 14:24:28.889022  758184 fix.go:112] recreateIfNeeded on test-preload-848370: state=Stopped err=<nil>
	I0916 14:24:28.889049  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	W0916 14:24:28.889190  758184 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 14:24:28.891048  758184 out.go:177] * Restarting existing kvm2 VM for "test-preload-848370" ...
	I0916 14:24:28.892035  758184 main.go:141] libmachine: (test-preload-848370) Calling .Start
	I0916 14:24:28.892178  758184 main.go:141] libmachine: (test-preload-848370) Ensuring networks are active...
	I0916 14:24:28.892821  758184 main.go:141] libmachine: (test-preload-848370) Ensuring network default is active
	I0916 14:24:28.893150  758184 main.go:141] libmachine: (test-preload-848370) Ensuring network mk-test-preload-848370 is active
	I0916 14:24:28.893462  758184 main.go:141] libmachine: (test-preload-848370) Getting domain xml...
	I0916 14:24:28.894120  758184 main.go:141] libmachine: (test-preload-848370) Creating domain...
	I0916 14:24:30.061976  758184 main.go:141] libmachine: (test-preload-848370) Waiting to get IP...
	I0916 14:24:30.062767  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:30.063204  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:30.063293  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:30.063206  758235 retry.go:31] will retry after 246.611624ms: waiting for machine to come up
	I0916 14:24:30.311804  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:30.312181  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:30.312210  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:30.312140  758235 retry.go:31] will retry after 273.785718ms: waiting for machine to come up
	I0916 14:24:30.587589  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:30.587998  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:30.588023  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:30.587948  758235 retry.go:31] will retry after 382.844298ms: waiting for machine to come up
	I0916 14:24:30.972448  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:30.972875  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:30.972903  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:30.972834  758235 retry.go:31] will retry after 375.43728ms: waiting for machine to come up
	I0916 14:24:31.349479  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:31.349915  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:31.349955  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:31.349876  758235 retry.go:31] will retry after 538.945625ms: waiting for machine to come up
	I0916 14:24:31.890655  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:31.891026  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:31.891055  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:31.890963  758235 retry.go:31] will retry after 775.669132ms: waiting for machine to come up
	I0916 14:24:32.667830  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:32.668188  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:32.668213  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:32.668114  758235 retry.go:31] will retry after 735.821505ms: waiting for machine to come up
	I0916 14:24:33.405188  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:33.405580  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:33.405607  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:33.405530  758235 retry.go:31] will retry after 1.37140127s: waiting for machine to come up
	I0916 14:24:34.778762  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:34.779111  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:34.779137  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:34.779063  758235 retry.go:31] will retry after 1.373755614s: waiting for machine to come up
	I0916 14:24:36.154187  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:36.154618  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:36.154647  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:36.154565  758235 retry.go:31] will retry after 1.797467658s: waiting for machine to come up
	I0916 14:24:37.954594  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:37.955022  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:37.955052  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:37.954951  758235 retry.go:31] will retry after 2.230886815s: waiting for machine to come up
	I0916 14:24:40.187939  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:40.188465  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:40.188493  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:40.188398  758235 retry.go:31] will retry after 2.253873542s: waiting for machine to come up
	I0916 14:24:42.444713  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:42.445142  758184 main.go:141] libmachine: (test-preload-848370) DBG | unable to find current IP address of domain test-preload-848370 in network mk-test-preload-848370
	I0916 14:24:42.445169  758184 main.go:141] libmachine: (test-preload-848370) DBG | I0916 14:24:42.445067  758235 retry.go:31] will retry after 4.520861443s: waiting for machine to come up
	I0916 14:24:46.969271  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:46.969725  758184 main.go:141] libmachine: (test-preload-848370) Found IP for machine: 192.168.39.56
	I0916 14:24:46.969758  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has current primary IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:46.969765  758184 main.go:141] libmachine: (test-preload-848370) Reserving static IP address...
	I0916 14:24:46.970177  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "test-preload-848370", mac: "52:54:00:f9:25:54", ip: "192.168.39.56"} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:46.970207  758184 main.go:141] libmachine: (test-preload-848370) Reserved static IP address: 192.168.39.56
	I0916 14:24:46.970227  758184 main.go:141] libmachine: (test-preload-848370) DBG | skip adding static IP to network mk-test-preload-848370 - found existing host DHCP lease matching {name: "test-preload-848370", mac: "52:54:00:f9:25:54", ip: "192.168.39.56"}
	I0916 14:24:46.970240  758184 main.go:141] libmachine: (test-preload-848370) Waiting for SSH to be available...
	I0916 14:24:46.970257  758184 main.go:141] libmachine: (test-preload-848370) DBG | Getting to WaitForSSH function...
	I0916 14:24:46.972235  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:46.972492  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:46.972518  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:46.972705  758184 main.go:141] libmachine: (test-preload-848370) DBG | Using SSH client type: external
	I0916 14:24:46.972731  758184 main.go:141] libmachine: (test-preload-848370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa (-rw-------)
	I0916 14:24:46.972771  758184 main.go:141] libmachine: (test-preload-848370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 14:24:46.972790  758184 main.go:141] libmachine: (test-preload-848370) DBG | About to run SSH command:
	I0916 14:24:46.972827  758184 main.go:141] libmachine: (test-preload-848370) DBG | exit 0
	I0916 14:24:47.096887  758184 main.go:141] libmachine: (test-preload-848370) DBG | SSH cmd err, output: <nil>: 
	I0916 14:24:47.097234  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetConfigRaw
	I0916 14:24:47.097889  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetIP
	I0916 14:24:47.100081  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.100407  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.100438  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.100656  758184 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/config.json ...
	I0916 14:24:47.100827  758184 machine.go:93] provisionDockerMachine start ...
	I0916 14:24:47.100844  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:47.101018  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:47.102876  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.103120  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.103155  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.103263  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:47.103428  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.103597  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.103715  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:47.103871  758184 main.go:141] libmachine: Using SSH client type: native
	I0916 14:24:47.104048  758184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0916 14:24:47.104065  758184 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 14:24:47.209562  758184 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 14:24:47.209587  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetMachineName
	I0916 14:24:47.209803  758184 buildroot.go:166] provisioning hostname "test-preload-848370"
	I0916 14:24:47.209830  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetMachineName
	I0916 14:24:47.210018  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:47.212458  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.212773  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.212793  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.212926  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:47.213125  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.213285  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.213408  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:47.213568  758184 main.go:141] libmachine: Using SSH client type: native
	I0916 14:24:47.213795  758184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0916 14:24:47.213808  758184 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-848370 && echo "test-preload-848370" | sudo tee /etc/hostname
	I0916 14:24:47.330889  758184 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-848370
	
	I0916 14:24:47.330913  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:47.333276  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.333569  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.333613  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.333745  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:47.333910  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.334033  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.334192  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:47.334355  758184 main.go:141] libmachine: Using SSH client type: native
	I0916 14:24:47.334583  758184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0916 14:24:47.334607  758184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-848370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-848370/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-848370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 14:24:47.446667  758184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 14:24:47.446764  758184 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 14:24:47.446790  758184 buildroot.go:174] setting up certificates
	I0916 14:24:47.446805  758184 provision.go:84] configureAuth start
	I0916 14:24:47.446819  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetMachineName
	I0916 14:24:47.447098  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetIP
	I0916 14:24:47.449655  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.449967  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.450001  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.450213  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:47.452155  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.452490  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.452516  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.452645  758184 provision.go:143] copyHostCerts
	I0916 14:24:47.452703  758184 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 14:24:47.452714  758184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 14:24:47.452800  758184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 14:24:47.452912  758184 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 14:24:47.452923  758184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 14:24:47.452951  758184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 14:24:47.453005  758184 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 14:24:47.453012  758184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 14:24:47.453033  758184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 14:24:47.453082  758184 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.test-preload-848370 san=[127.0.0.1 192.168.39.56 localhost minikube test-preload-848370]
	I0916 14:24:47.532611  758184 provision.go:177] copyRemoteCerts
	I0916 14:24:47.532670  758184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 14:24:47.532700  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:47.535173  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.535472  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.535499  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.535628  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:47.535821  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.535963  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:47.536083  758184 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa Username:docker}
	I0916 14:24:47.619054  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 14:24:47.642524  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 14:24:47.665023  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 14:24:47.687099  758184 provision.go:87] duration metric: took 240.282538ms to configureAuth
	I0916 14:24:47.687121  758184 buildroot.go:189] setting minikube options for container-runtime
	I0916 14:24:47.687286  758184 config.go:182] Loaded profile config "test-preload-848370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0916 14:24:47.687365  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:47.690096  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.690442  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.690467  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.690678  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:47.690892  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.691053  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.691192  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:47.691326  758184 main.go:141] libmachine: Using SSH client type: native
	I0916 14:24:47.691547  758184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0916 14:24:47.691565  758184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 14:24:47.918777  758184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 14:24:47.918811  758184 machine.go:96] duration metric: took 817.972608ms to provisionDockerMachine
	I0916 14:24:47.918823  758184 start.go:293] postStartSetup for "test-preload-848370" (driver="kvm2")
	I0916 14:24:47.918836  758184 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 14:24:47.918861  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:47.919222  758184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 14:24:47.919260  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:47.921589  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.921892  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:47.921917  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:47.922039  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:47.922228  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:47.922385  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:47.922509  758184 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa Username:docker}
	I0916 14:24:48.003942  758184 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 14:24:48.007889  758184 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 14:24:48.007917  758184 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 14:24:48.008003  758184 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 14:24:48.008099  758184 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 14:24:48.008202  758184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 14:24:48.017078  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:24:48.040698  758184 start.go:296] duration metric: took 121.833393ms for postStartSetup
	I0916 14:24:48.040746  758184 fix.go:56] duration metric: took 19.170252232s for fixHost
	I0916 14:24:48.040772  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:48.043471  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.043793  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:48.043825  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.043999  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:48.044230  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:48.044409  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:48.044544  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:48.044719  758184 main.go:141] libmachine: Using SSH client type: native
	I0916 14:24:48.044889  758184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0916 14:24:48.044902  758184 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 14:24:48.149928  758184 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726496688.125245094
	
	I0916 14:24:48.149957  758184 fix.go:216] guest clock: 1726496688.125245094
	I0916 14:24:48.149964  758184 fix.go:229] Guest: 2024-09-16 14:24:48.125245094 +0000 UTC Remote: 2024-09-16 14:24:48.040751018 +0000 UTC m=+24.356239155 (delta=84.494076ms)
	I0916 14:24:48.149986  758184 fix.go:200] guest clock delta is within tolerance: 84.494076ms
	I0916 14:24:48.149991  758184 start.go:83] releasing machines lock for "test-preload-848370", held for 19.279515444s
	I0916 14:24:48.150009  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:48.150271  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetIP
	I0916 14:24:48.152352  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.152662  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:48.152693  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.152813  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:48.153270  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:48.153446  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:24:48.153561  758184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 14:24:48.153614  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:48.153631  758184 ssh_runner.go:195] Run: cat /version.json
	I0916 14:24:48.153651  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:24:48.155982  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.156280  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:48.156302  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.156324  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.156549  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:48.156711  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:48.156731  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:48.156714  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:48.156869  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:48.156928  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:24:48.157003  758184 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa Username:docker}
	I0916 14:24:48.157084  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:24:48.157221  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:24:48.157366  758184 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa Username:docker}
	I0916 14:24:48.252746  758184 ssh_runner.go:195] Run: systemctl --version
	I0916 14:24:48.258508  758184 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 14:24:48.397697  758184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 14:24:48.404066  758184 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 14:24:48.404129  758184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 14:24:48.419071  758184 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 14:24:48.419100  758184 start.go:495] detecting cgroup driver to use...
	I0916 14:24:48.419170  758184 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 14:24:48.435630  758184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 14:24:48.448370  758184 docker.go:217] disabling cri-docker service (if available) ...
	I0916 14:24:48.448415  758184 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 14:24:48.460752  758184 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 14:24:48.473417  758184 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 14:24:48.586245  758184 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 14:24:48.739008  758184 docker.go:233] disabling docker service ...
	I0916 14:24:48.739093  758184 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 14:24:48.753808  758184 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 14:24:48.766459  758184 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 14:24:48.887796  758184 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 14:24:49.015579  758184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 14:24:49.029704  758184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 14:24:49.047871  758184 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0916 14:24:49.047934  758184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:24:49.058101  758184 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 14:24:49.058175  758184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:24:49.068277  758184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:24:49.078828  758184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:24:49.089258  758184 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 14:24:49.099714  758184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:24:49.111080  758184 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:24:49.128505  758184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
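
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs. A rough, hypothetical Go equivalent of that key/value substitution (not minikube's actual code; file contents below are made up for illustration):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setCrioOption rewrites a `key = ...` line in a cri-o config snippet,
    // mirroring what the in-place sed edits in the log do.
    func setCrioOption(conf, key, value string) string {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.7")
    	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
    	fmt.Print(conf)
    }
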
	I0916 14:24:49.138539  758184 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 14:24:49.147536  758184 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 14:24:49.147606  758184 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 14:24:49.160746  758184 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 14:24:49.170063  758184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:24:49.290201  758184 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 14:24:49.386159  758184 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 14:24:49.386230  758184 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 14:24:49.391106  758184 start.go:563] Will wait 60s for crictl version
	I0916 14:24:49.391150  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:49.394708  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 14:24:49.430829  758184 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
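
The two "Will wait 60s" lines above poll for the /var/run/crio/crio.sock socket and a working crictl after restarting cri-o. A minimal sketch of that kind of bounded wait; the retry interval and helper name are assumptions:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the deadline passes, similar in
    // spirit to minikube waiting for the CRI socket to appear.
    func waitForPath(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
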
	I0916 14:24:49.430916  758184 ssh_runner.go:195] Run: crio --version
	I0916 14:24:49.459636  758184 ssh_runner.go:195] Run: crio --version
	I0916 14:24:49.490626  758184 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0916 14:24:49.491763  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetIP
	I0916 14:24:49.494042  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:49.494406  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:24:49.494434  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:24:49.494640  758184 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 14:24:49.498814  758184 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
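
The bash one-liner above strips any existing host.minikube.internal entry and re-appends the current one, so repeated runs leave a single up-to-date line. A hedged Go sketch of the same idea, operating on a string rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends "ip\tname",
    // mirroring the grep -v / echo pipeline from the log.
    func ensureHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }
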
	I0916 14:24:49.511037  758184 kubeadm.go:883] updating cluster {Name:test-preload-848370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-848370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 14:24:49.511171  758184 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0916 14:24:49.511221  758184 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:24:49.550237  758184 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0916 14:24:49.550304  758184 ssh_runner.go:195] Run: which lz4
	I0916 14:24:49.554466  758184 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 14:24:49.558645  758184 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 14:24:49.558682  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0916 14:24:51.025994  758184 crio.go:462] duration metric: took 1.471585277s to copy over tarball
	I0916 14:24:51.026103  758184 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 14:24:53.300059  758184 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273919751s)
	I0916 14:24:53.300093  758184 crio.go:469] duration metric: took 2.274068643s to extract the tarball
	I0916 14:24:53.300103  758184 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 14:24:53.340642  758184 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:24:53.381502  758184 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0916 14:24:53.381526  758184 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 14:24:53.381598  758184 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 14:24:53.381619  758184 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 14:24:53.381633  758184 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0916 14:24:53.381649  758184 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0916 14:24:53.381619  758184 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0916 14:24:53.381735  758184 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 14:24:53.381737  758184 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 14:24:53.381781  758184 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 14:24:53.383106  758184 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 14:24:53.383112  758184 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 14:24:53.383109  758184 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 14:24:53.383137  758184 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0916 14:24:53.383145  758184 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 14:24:53.383111  758184 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0916 14:24:53.383108  758184 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 14:24:53.383176  758184 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0916 14:24:53.543402  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 14:24:53.547671  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0916 14:24:53.552412  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0916 14:24:53.555760  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0916 14:24:53.558880  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0916 14:24:53.569834  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0916 14:24:53.593480  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0916 14:24:53.639298  758184 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0916 14:24:53.639337  758184 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 14:24:53.639374  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:53.720147  758184 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0916 14:24:53.720195  758184 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0916 14:24:53.720220  758184 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0916 14:24:53.720246  758184 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0916 14:24:53.720250  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:53.720276  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:53.731028  758184 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0916 14:24:53.731067  758184 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0916 14:24:53.731078  758184 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0916 14:24:53.731095  758184 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0916 14:24:53.731107  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:53.731122  758184 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0916 14:24:53.731122  758184 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 14:24:53.731137  758184 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0916 14:24:53.731162  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:53.731167  758184 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0916 14:24:53.731195  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:53.731207  758184 ssh_runner.go:195] Run: which crictl
	I0916 14:24:53.731226  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 14:24:53.731303  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0916 14:24:53.731311  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0916 14:24:53.801047  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 14:24:53.801080  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0916 14:24:53.801135  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0916 14:24:53.801144  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 14:24:53.801202  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0916 14:24:53.801255  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0916 14:24:53.801305  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0916 14:24:53.954558  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0916 14:24:53.954610  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 14:24:53.954649  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0916 14:24:53.954720  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0916 14:24:53.954775  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0916 14:24:53.954838  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0916 14:24:53.954870  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 14:24:54.087323  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 14:24:54.087485  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0916 14:24:54.093121  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0916 14:24:54.093233  758184 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0916 14:24:54.095757  758184 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0916 14:24:54.095846  758184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0916 14:24:54.095853  758184 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0916 14:24:54.095923  758184 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0916 14:24:54.095945  758184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0916 14:24:54.095983  758184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0916 14:24:54.170415  758184 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 14:24:54.170440  758184 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0916 14:24:54.170533  758184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0916 14:24:54.170535  758184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0916 14:24:54.183651  758184 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0916 14:24:54.183726  758184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0916 14:24:54.183734  758184 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0916 14:24:54.183787  758184 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0916 14:24:54.183807  758184 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0916 14:24:54.183828  758184 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0916 14:24:54.183831  758184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0916 14:24:54.183842  758184 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0916 14:24:54.183860  758184 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0916 14:24:54.183897  758184 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0916 14:24:54.183930  758184 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0916 14:24:54.191193  758184 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0916 14:24:54.228776  758184 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 14:24:58.144131  758184 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.96037618s)
	I0916 14:24:58.144182  758184 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0916 14:24:58.144144  758184 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.96028575s)
	I0916 14:24:58.144184  758184 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.915384803s)
	I0916 14:24:58.144197  758184 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0916 14:24:58.144218  758184 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0916 14:24:58.144266  758184 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0916 14:24:58.591775  758184 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0916 14:24:58.591819  758184 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0916 14:24:58.591869  758184 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0916 14:24:59.338652  758184 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0916 14:24:59.338710  758184 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0916 14:24:59.338774  758184 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0916 14:24:59.477092  758184 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0916 14:24:59.477139  758184 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0916 14:24:59.477189  758184 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0916 14:24:59.923804  758184 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0916 14:24:59.923856  758184 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0916 14:24:59.923931  758184 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0916 14:25:02.175069  758184 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.251109756s)
	I0916 14:25:02.175114  758184 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0916 14:25:02.175146  758184 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0916 14:25:02.175198  758184 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0916 14:25:03.018881  758184 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0916 14:25:03.018938  758184 cache_images.go:123] Successfully loaded all cached images
	I0916 14:25:03.018946  758184 cache_images.go:92] duration metric: took 9.637406528s to LoadCachedImages
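
The block above transfers each cached image tarball only when it is missing on the VM ("copy: skipping ... (exists)") and then loads them one at a time with `sudo podman load -i`. A simplified sketch of that load loop; the directory and command invocation are illustrative, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    // loadCachedImages loads every tarball under dir with `podman load`,
    // roughly mirroring the sequential loads seen in the log.
    func loadCachedImages(dir string) error {
    	tarballs, err := filepath.Glob(filepath.Join(dir, "*"))
    	if err != nil {
    		return err
    	}
    	for _, tb := range tarballs {
    		fmt.Printf("Loading image: %s\n", tb)
    		cmd := exec.Command("sudo", "podman", "load", "-i", tb)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			return fmt.Errorf("podman load %s: %w", tb, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := loadCachedImages("/var/lib/minikube/images"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
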
	I0916 14:25:03.018963  758184 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.24.4 crio true true} ...
	I0916 14:25:03.019099  758184 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-848370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-848370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 14:25:03.019174  758184 ssh_runner.go:195] Run: crio config
	I0916 14:25:03.065303  758184 cni.go:84] Creating CNI manager for ""
	I0916 14:25:03.065327  758184 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:25:03.065343  758184 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 14:25:03.065361  758184 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-848370 NodeName:test-preload-848370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 14:25:03.065498  758184 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-848370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 14:25:03.065566  758184 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0916 14:25:03.075995  758184 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 14:25:03.076065  758184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 14:25:03.086155  758184 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0916 14:25:03.103184  758184 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 14:25:03.119985  758184 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0916 14:25:03.137400  758184 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I0916 14:25:03.141335  758184 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 14:25:03.154321  758184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:25:03.281894  758184 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 14:25:03.300035  758184 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370 for IP: 192.168.39.56
	I0916 14:25:03.300066  758184 certs.go:194] generating shared ca certs ...
	I0916 14:25:03.300089  758184 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:25:03.300273  758184 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 14:25:03.300349  758184 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 14:25:03.300364  758184 certs.go:256] generating profile certs ...
	I0916 14:25:03.300498  758184 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/client.key
	I0916 14:25:03.300584  758184 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/apiserver.key.547b1fd2
	I0916 14:25:03.300646  758184 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/proxy-client.key
	I0916 14:25:03.300817  758184 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 14:25:03.300863  758184 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 14:25:03.300883  758184 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 14:25:03.300909  758184 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 14:25:03.300946  758184 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 14:25:03.300983  758184 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 14:25:03.301050  758184 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:25:03.301940  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 14:25:03.347381  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 14:25:03.379323  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 14:25:03.408091  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 14:25:03.438008  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 14:25:03.464400  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 14:25:03.496898  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 14:25:03.529221  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 14:25:03.555031  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 14:25:03.579612  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 14:25:03.603597  758184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 14:25:03.628193  758184 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 14:25:03.645015  758184 ssh_runner.go:195] Run: openssl version
	I0916 14:25:03.650866  758184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 14:25:03.661453  758184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:25:03.666328  758184 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:25:03.666416  758184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:25:03.672435  758184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 14:25:03.683353  758184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 14:25:03.694320  758184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 14:25:03.699098  758184 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:25:03.699163  758184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 14:25:03.705086  758184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 14:25:03.717191  758184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 14:25:03.729419  758184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 14:25:03.734483  758184 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:25:03.734531  758184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 14:25:03.740862  758184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 14:25:03.752908  758184 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:25:03.757997  758184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 14:25:03.764568  758184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 14:25:03.771212  758184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 14:25:03.778379  758184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 14:25:03.785473  758184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 14:25:03.791919  758184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
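
The `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. An equivalent check in Go using crypto/x509; a sketch, not minikube's implementation:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM certificate at path is still valid
    // for at least d (the `openssl x509 -checkend` equivalent).
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
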
	I0916 14:25:03.798099  758184 kubeadm.go:392] StartCluster: {Name:test-preload-848370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-848370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:25:03.798191  758184 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 14:25:03.798281  758184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:25:03.844486  758184 cri.go:89] found id: ""
	I0916 14:25:03.844564  758184 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 14:25:03.855005  758184 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 14:25:03.855028  758184 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 14:25:03.855081  758184 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 14:25:03.865348  758184 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 14:25:03.865870  758184 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-848370" does not appear in /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 14:25:03.865999  758184 kubeconfig.go:62] /home/jenkins/minikube-integration/19652-713072/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-848370" cluster setting kubeconfig missing "test-preload-848370" context setting]
	I0916 14:25:03.866343  758184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/kubeconfig: {Name:mk84449075783d20927a7d708361081f8c4a2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:25:03.866980  758184 kapi.go:59] client config for test-preload-848370: &rest.Config{Host:"https://192.168.39.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 14:25:03.867613  758184 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 14:25:03.877401  758184 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.56
	I0916 14:25:03.877435  758184 kubeadm.go:1160] stopping kube-system containers ...
	I0916 14:25:03.877448  758184 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0916 14:25:03.877530  758184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:25:03.922801  758184 cri.go:89] found id: ""
	I0916 14:25:03.922868  758184 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 14:25:03.938342  758184 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 14:25:03.948289  758184 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 14:25:03.948318  758184 kubeadm.go:157] found existing configuration files:
	
	I0916 14:25:03.948388  758184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 14:25:03.957586  758184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 14:25:03.957644  758184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 14:25:03.966970  758184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 14:25:03.975951  758184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 14:25:03.976011  758184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 14:25:03.985225  758184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 14:25:03.994224  758184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 14:25:03.994303  758184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 14:25:04.003542  758184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 14:25:04.012921  758184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 14:25:04.012980  758184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 14:25:04.022277  758184 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 14:25:04.031841  758184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:25:04.129086  758184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:25:04.880945  758184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:25:05.131738  758184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:25:05.198625  758184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:25:05.264957  758184 api_server.go:52] waiting for apiserver process to appear ...
	I0916 14:25:05.265050  758184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:25:05.766189  758184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:25:06.265228  758184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:25:06.324964  758184 api_server.go:72] duration metric: took 1.060003764s to wait for apiserver process to appear ...
	I0916 14:25:06.325007  758184 api_server.go:88] waiting for apiserver healthz status ...
	I0916 14:25:06.325032  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:06.325552  758184 api_server.go:269] stopped: https://192.168.39.56:8443/healthz: Get "https://192.168.39.56:8443/healthz": dial tcp 192.168.39.56:8443: connect: connection refused
	I0916 14:25:06.825194  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:10.157171  758184 api_server.go:279] https://192.168.39.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 14:25:10.157215  758184 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 14:25:10.157230  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:10.198556  758184 api_server.go:279] https://192.168.39.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 14:25:10.198581  758184 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 14:25:10.325898  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:10.333972  758184 api_server.go:279] https://192.168.39.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 14:25:10.333999  758184 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 14:25:10.825513  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:10.830450  758184 api_server.go:279] https://192.168.39.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 14:25:10.830490  758184 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 14:25:11.326079  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:11.340739  758184 api_server.go:279] https://192.168.39.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 14:25:11.340772  758184 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 14:25:11.825317  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:11.830538  758184 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I0916 14:25:11.836566  758184 api_server.go:141] control plane version: v1.24.4
	I0916 14:25:11.836587  758184 api_server.go:131] duration metric: took 5.511574001s to wait for apiserver health ...
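
The sequence above is the usual restart progression for kube-apiserver: connection refused while the container comes up, 403 for the anonymous probe, 500 while the rbac and scheduling post-start hooks finish, then 200. A minimal Go sketch of the same kind of polling loop, with the endpoint hard-coded from the log and TLS verification skipped purely for illustration (minikube's api_server.go uses the cluster CA and client certificates instead):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the deadline expires. Skipping TLS verification is a shortcut for this
	// sketch only; a real client would trust the cluster CA.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.39.56:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
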
	I0916 14:25:11.836596  758184 cni.go:84] Creating CNI manager for ""
	I0916 14:25:11.836602  758184 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:25:11.838305  758184 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 14:25:11.839447  758184 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 14:25:11.849953  758184 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 14:25:11.867094  758184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 14:25:11.867176  758184 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 14:25:11.867197  758184 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 14:25:11.878738  758184 system_pods.go:59] 7 kube-system pods found
	I0916 14:25:11.878774  758184 system_pods.go:61] "coredns-6d4b75cb6d-hkfld" [3b37f443-52a5-4b8f-a4bd-df007d09bb2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 14:25:11.878784  758184 system_pods.go:61] "etcd-test-preload-848370" [5c85589c-a130-4a22-8184-dac1d800e465] Running
	I0916 14:25:11.878792  758184 system_pods.go:61] "kube-apiserver-test-preload-848370" [3996277d-7f0f-443a-b7ce-fca3cf842130] Running
	I0916 14:25:11.878799  758184 system_pods.go:61] "kube-controller-manager-test-preload-848370" [4cecc937-4ccf-439d-a731-d213ae49af58] Running
	I0916 14:25:11.878805  758184 system_pods.go:61] "kube-proxy-xf7j8" [4a26ac24-029f-4ed0-bb7e-3414e73ebd7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0916 14:25:11.878814  758184 system_pods.go:61] "kube-scheduler-test-preload-848370" [62860887-7958-47ce-8758-cce99d9d9868] Running
	I0916 14:25:11.878821  758184 system_pods.go:61] "storage-provisioner" [2c23dceb-45b1-4cad-9ff8-90ad88ba4de9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 14:25:11.878830  758184 system_pods.go:74] duration metric: took 11.716202ms to wait for pod list to return data ...
	I0916 14:25:11.878843  758184 node_conditions.go:102] verifying NodePressure condition ...
	I0916 14:25:11.882017  758184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 14:25:11.882047  758184 node_conditions.go:123] node cpu capacity is 2
	I0916 14:25:11.882060  758184 node_conditions.go:105] duration metric: took 3.210982ms to run NodePressure ...
	I0916 14:25:11.882082  758184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:25:12.041354  758184 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 14:25:12.046485  758184 retry.go:31] will retry after 220.387922ms: kubelet not initialised
	I0916 14:25:12.271730  758184 retry.go:31] will retry after 415.389908ms: kubelet not initialised
	I0916 14:25:12.692286  758184 retry.go:31] will retry after 587.482359ms: kubelet not initialised
	I0916 14:25:13.284924  758184 retry.go:31] will retry after 1.092616226s: kubelet not initialised
	I0916 14:25:14.383410  758184 retry.go:31] will retry after 1.381273968s: kubelet not initialised
	I0916 14:25:15.770898  758184 kubeadm.go:739] kubelet initialised
	I0916 14:25:15.770925  758184 kubeadm.go:740] duration metric: took 3.72954105s waiting for restarted kubelet to initialise ...
	I0916 14:25:15.770933  758184 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 14:25:15.775756  758184 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-hkfld" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:15.780241  758184 pod_ready.go:98] node "test-preload-848370" hosting pod "coredns-6d4b75cb6d-hkfld" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.780261  758184 pod_ready.go:82] duration metric: took 4.477402ms for pod "coredns-6d4b75cb6d-hkfld" in "kube-system" namespace to be "Ready" ...
	E0916 14:25:15.780269  758184 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-848370" hosting pod "coredns-6d4b75cb6d-hkfld" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.780278  758184 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:15.784190  758184 pod_ready.go:98] node "test-preload-848370" hosting pod "etcd-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.784209  758184 pod_ready.go:82] duration metric: took 3.921037ms for pod "etcd-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	E0916 14:25:15.784221  758184 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-848370" hosting pod "etcd-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.784226  758184 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:15.788406  758184 pod_ready.go:98] node "test-preload-848370" hosting pod "kube-apiserver-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.788423  758184 pod_ready.go:82] duration metric: took 4.187118ms for pod "kube-apiserver-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	E0916 14:25:15.788431  758184 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-848370" hosting pod "kube-apiserver-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.788436  758184 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:15.797864  758184 pod_ready.go:98] node "test-preload-848370" hosting pod "kube-controller-manager-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.797892  758184 pod_ready.go:82] duration metric: took 9.447563ms for pod "kube-controller-manager-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	E0916 14:25:15.797905  758184 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-848370" hosting pod "kube-controller-manager-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:15.797912  758184 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xf7j8" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:16.169770  758184 pod_ready.go:98] node "test-preload-848370" hosting pod "kube-proxy-xf7j8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:16.169798  758184 pod_ready.go:82] duration metric: took 371.872455ms for pod "kube-proxy-xf7j8" in "kube-system" namespace to be "Ready" ...
	E0916 14:25:16.169807  758184 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-848370" hosting pod "kube-proxy-xf7j8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:16.169814  758184 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:16.570747  758184 pod_ready.go:98] node "test-preload-848370" hosting pod "kube-scheduler-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:16.570773  758184 pod_ready.go:82] duration metric: took 400.952881ms for pod "kube-scheduler-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	E0916 14:25:16.570784  758184 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-848370" hosting pod "kube-scheduler-test-preload-848370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:16.570790  758184 pod_ready.go:39] duration metric: took 799.848007ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 14:25:16.570809  758184 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 14:25:16.582923  758184 ops.go:34] apiserver oom_adj: -16
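
The probe above reads the restarted apiserver's oom_adj (negative values make the OOM killer less likely to pick the process). A small Go sketch of an equivalent check, assuming pgrep is available on the host:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Find the newest process named exactly kube-apiserver.
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
	
		// Read its oom_adj from procfs, as the test does via "cat".
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", data)
	}
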
	I0916 14:25:16.582951  758184 kubeadm.go:597] duration metric: took 12.727915788s to restartPrimaryControlPlane
	I0916 14:25:16.582963  758184 kubeadm.go:394] duration metric: took 12.784874631s to StartCluster
	I0916 14:25:16.582985  758184 settings.go:142] acquiring lock: {Name:mka9d51f09298db6ba9006267d9a91b0a28fad59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:25:16.583065  758184 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 14:25:16.583676  758184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/kubeconfig: {Name:mk84449075783d20927a7d708361081f8c4a2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:25:16.583916  758184 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 14:25:16.583987  758184 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 14:25:16.584113  758184 addons.go:69] Setting storage-provisioner=true in profile "test-preload-848370"
	I0916 14:25:16.584139  758184 addons.go:234] Setting addon storage-provisioner=true in "test-preload-848370"
	I0916 14:25:16.584137  758184 config.go:182] Loaded profile config "test-preload-848370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	W0916 14:25:16.584147  758184 addons.go:243] addon storage-provisioner should already be in state true
	I0916 14:25:16.584150  758184 addons.go:69] Setting default-storageclass=true in profile "test-preload-848370"
	I0916 14:25:16.584167  758184 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-848370"
	I0916 14:25:16.584184  758184 host.go:66] Checking if "test-preload-848370" exists ...
	I0916 14:25:16.584491  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:25:16.584509  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:25:16.584535  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:25:16.584551  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:25:16.585749  758184 out.go:177] * Verifying Kubernetes components...
	I0916 14:25:16.587016  758184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:25:16.599763  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0916 14:25:16.600214  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:25:16.600750  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:25:16.600774  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:25:16.600829  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I0916 14:25:16.601260  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:25:16.601275  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:25:16.601463  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetState
	I0916 14:25:16.601739  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:25:16.601772  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:25:16.602109  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:25:16.602715  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:25:16.602757  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:25:16.604207  758184 kapi.go:59] client config for test-preload-848370: &rest.Config{Host:"https://192.168.39.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/test-preload-848370/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 14:25:16.604600  758184 addons.go:234] Setting addon default-storageclass=true in "test-preload-848370"
	W0916 14:25:16.604622  758184 addons.go:243] addon default-storageclass should already be in state true
	I0916 14:25:16.604653  758184 host.go:66] Checking if "test-preload-848370" exists ...
	I0916 14:25:16.605034  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:25:16.605083  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:25:16.618749  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
	I0916 14:25:16.619197  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:25:16.619645  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0916 14:25:16.619746  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:25:16.619777  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:25:16.619991  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:25:16.620088  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:25:16.620261  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetState
	I0916 14:25:16.620447  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:25:16.620468  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:25:16.620822  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:25:16.621426  758184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:25:16.621472  758184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:25:16.621977  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:25:16.624090  758184 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 14:25:16.625414  758184 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 14:25:16.625433  758184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 14:25:16.625450  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:25:16.628339  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:25:16.628995  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:25:16.629017  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:25:16.629255  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:25:16.629427  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:25:16.629576  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:25:16.629755  758184 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa Username:docker}
	I0916 14:25:16.657895  758184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36693
	I0916 14:25:16.658359  758184 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:25:16.658973  758184 main.go:141] libmachine: Using API Version  1
	I0916 14:25:16.658994  758184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:25:16.659361  758184 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:25:16.659532  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetState
	I0916 14:25:16.661078  758184 main.go:141] libmachine: (test-preload-848370) Calling .DriverName
	I0916 14:25:16.661331  758184 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 14:25:16.661351  758184 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 14:25:16.661371  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHHostname
	I0916 14:25:16.664280  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:25:16.664716  758184 main.go:141] libmachine: (test-preload-848370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:25:54", ip: ""} in network mk-test-preload-848370: {Iface:virbr1 ExpiryTime:2024-09-16 15:24:39 +0000 UTC Type:0 Mac:52:54:00:f9:25:54 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-848370 Clientid:01:52:54:00:f9:25:54}
	I0916 14:25:16.664739  758184 main.go:141] libmachine: (test-preload-848370) DBG | domain test-preload-848370 has defined IP address 192.168.39.56 and MAC address 52:54:00:f9:25:54 in network mk-test-preload-848370
	I0916 14:25:16.664936  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHPort
	I0916 14:25:16.665118  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHKeyPath
	I0916 14:25:16.665250  758184 main.go:141] libmachine: (test-preload-848370) Calling .GetSSHUsername
	I0916 14:25:16.665364  758184 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/test-preload-848370/id_rsa Username:docker}
	I0916 14:25:16.759883  758184 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 14:25:16.776558  758184 node_ready.go:35] waiting up to 6m0s for node "test-preload-848370" to be "Ready" ...
	I0916 14:25:16.849892  758184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 14:25:16.864960  758184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 14:25:17.823260  758184 main.go:141] libmachine: Making call to close driver server
	I0916 14:25:17.823290  758184 main.go:141] libmachine: (test-preload-848370) Calling .Close
	I0916 14:25:17.823358  758184 main.go:141] libmachine: Making call to close driver server
	I0916 14:25:17.823380  758184 main.go:141] libmachine: (test-preload-848370) Calling .Close
	I0916 14:25:17.823567  758184 main.go:141] libmachine: Successfully made call to close driver server
	I0916 14:25:17.823587  758184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 14:25:17.823595  758184 main.go:141] libmachine: Making call to close driver server
	I0916 14:25:17.823603  758184 main.go:141] libmachine: (test-preload-848370) Calling .Close
	I0916 14:25:17.823634  758184 main.go:141] libmachine: (test-preload-848370) DBG | Closing plugin on server side
	I0916 14:25:17.823680  758184 main.go:141] libmachine: Successfully made call to close driver server
	I0916 14:25:17.823697  758184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 14:25:17.823711  758184 main.go:141] libmachine: Making call to close driver server
	I0916 14:25:17.823723  758184 main.go:141] libmachine: (test-preload-848370) Calling .Close
	I0916 14:25:17.823786  758184 main.go:141] libmachine: Successfully made call to close driver server
	I0916 14:25:17.823798  758184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 14:25:17.823974  758184 main.go:141] libmachine: Successfully made call to close driver server
	I0916 14:25:17.823989  758184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 14:25:17.824013  758184 main.go:141] libmachine: (test-preload-848370) DBG | Closing plugin on server side
	I0916 14:25:17.831583  758184 main.go:141] libmachine: Making call to close driver server
	I0916 14:25:17.831611  758184 main.go:141] libmachine: (test-preload-848370) Calling .Close
	I0916 14:25:17.831828  758184 main.go:141] libmachine: Successfully made call to close driver server
	I0916 14:25:17.831841  758184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 14:25:17.833729  758184 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 14:25:17.834857  758184 addons.go:510] duration metric: took 1.250881574s for enable addons: enabled=[storage-provisioner default-storageclass]
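
For reference, the addon-enable step above boils down to copying each manifest onto the node and applying it with the node's bundled kubectl against the local kubeconfig. A rough Go sketch of that apply step, run locally rather than over SSH as the test does, with the paths taken from the log:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// applyAddon applies one addon manifest with the node's kubectl, mirroring
	// the command shown in the log. sudo accepts the leading VAR=value argument
	// as an environment setting for the child process.
	func applyAddon(manifest string) error {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", manifest)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}
	
	func main() {
		for _, m := range []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		} {
			if err := applyAddon(m); err != nil {
				fmt.Println("apply failed:", err)
			}
		}
	}
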
	I0916 14:25:18.780579  758184 node_ready.go:53] node "test-preload-848370" has status "Ready":"False"
	I0916 14:25:20.781185  758184 node_ready.go:49] node "test-preload-848370" has status "Ready":"True"
	I0916 14:25:20.781213  758184 node_ready.go:38] duration metric: took 4.004614859s for node "test-preload-848370" to be "Ready" ...
	I0916 14:25:20.781225  758184 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 14:25:20.786599  758184 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-hkfld" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:20.791103  758184 pod_ready.go:93] pod "coredns-6d4b75cb6d-hkfld" in "kube-system" namespace has status "Ready":"True"
	I0916 14:25:20.791120  758184 pod_ready.go:82] duration metric: took 4.492805ms for pod "coredns-6d4b75cb6d-hkfld" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:20.791128  758184 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:22.802124  758184 pod_ready.go:103] pod "etcd-test-preload-848370" in "kube-system" namespace has status "Ready":"False"
	I0916 14:25:25.297657  758184 pod_ready.go:103] pod "etcd-test-preload-848370" in "kube-system" namespace has status "Ready":"False"
	I0916 14:25:25.799131  758184 pod_ready.go:93] pod "etcd-test-preload-848370" in "kube-system" namespace has status "Ready":"True"
	I0916 14:25:25.799162  758184 pod_ready.go:82] duration metric: took 5.008026726s for pod "etcd-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.799176  758184 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.804129  758184 pod_ready.go:93] pod "kube-apiserver-test-preload-848370" in "kube-system" namespace has status "Ready":"True"
	I0916 14:25:25.804151  758184 pod_ready.go:82] duration metric: took 4.96658ms for pod "kube-apiserver-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.804163  758184 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.808624  758184 pod_ready.go:93] pod "kube-controller-manager-test-preload-848370" in "kube-system" namespace has status "Ready":"True"
	I0916 14:25:25.808640  758184 pod_ready.go:82] duration metric: took 4.469269ms for pod "kube-controller-manager-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.808648  758184 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xf7j8" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.812453  758184 pod_ready.go:93] pod "kube-proxy-xf7j8" in "kube-system" namespace has status "Ready":"True"
	I0916 14:25:25.812467  758184 pod_ready.go:82] duration metric: took 3.813338ms for pod "kube-proxy-xf7j8" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.812475  758184 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.817318  758184 pod_ready.go:93] pod "kube-scheduler-test-preload-848370" in "kube-system" namespace has status "Ready":"True"
	I0916 14:25:25.817344  758184 pod_ready.go:82] duration metric: took 4.855365ms for pod "kube-scheduler-test-preload-848370" in "kube-system" namespace to be "Ready" ...
	I0916 14:25:25.817354  758184 pod_ready.go:39] duration metric: took 5.036115504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
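
The pod_ready wait above checks the PodReady condition for pods matching each system-critical label. A small client-go sketch of an equivalent check, assuming the kubeconfig path shown in the log and a bounded poll loop:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether a pod's PodReady condition is True.
	func podReady(pod corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Kubeconfig path copied from the log for illustration; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19652-713072/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
	
		// Same label selectors the test waits on.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		deadline := time.Now().Add(4 * time.Minute)
		for _, sel := range selectors {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				ready := err == nil && len(pods.Items) > 0
				if ready {
					for _, p := range pods.Items {
						if !podReady(p) {
							ready = false
							break
						}
					}
				}
				if ready {
					fmt.Printf("pods matching %q are Ready\n", sel)
					break
				}
				if time.Now().After(deadline) {
					fmt.Printf("timed out waiting for %q\n", sel)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}
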
	I0916 14:25:25.817371  758184 api_server.go:52] waiting for apiserver process to appear ...
	I0916 14:25:25.817429  758184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:25:25.832176  758184 api_server.go:72] duration metric: took 9.24822831s to wait for apiserver process to appear ...
	I0916 14:25:25.832198  758184 api_server.go:88] waiting for apiserver healthz status ...
	I0916 14:25:25.832214  758184 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0916 14:25:25.837786  758184 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I0916 14:25:25.838908  758184 api_server.go:141] control plane version: v1.24.4
	I0916 14:25:25.838934  758184 api_server.go:131] duration metric: took 6.729081ms to wait for apiserver health ...
	I0916 14:25:25.838944  758184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 14:25:25.998013  758184 system_pods.go:59] 7 kube-system pods found
	I0916 14:25:25.998043  758184 system_pods.go:61] "coredns-6d4b75cb6d-hkfld" [3b37f443-52a5-4b8f-a4bd-df007d09bb2b] Running
	I0916 14:25:25.998048  758184 system_pods.go:61] "etcd-test-preload-848370" [5c85589c-a130-4a22-8184-dac1d800e465] Running
	I0916 14:25:25.998052  758184 system_pods.go:61] "kube-apiserver-test-preload-848370" [3996277d-7f0f-443a-b7ce-fca3cf842130] Running
	I0916 14:25:25.998056  758184 system_pods.go:61] "kube-controller-manager-test-preload-848370" [4cecc937-4ccf-439d-a731-d213ae49af58] Running
	I0916 14:25:25.998059  758184 system_pods.go:61] "kube-proxy-xf7j8" [4a26ac24-029f-4ed0-bb7e-3414e73ebd7b] Running
	I0916 14:25:25.998063  758184 system_pods.go:61] "kube-scheduler-test-preload-848370" [62860887-7958-47ce-8758-cce99d9d9868] Running
	I0916 14:25:25.998066  758184 system_pods.go:61] "storage-provisioner" [2c23dceb-45b1-4cad-9ff8-90ad88ba4de9] Running
	I0916 14:25:25.998072  758184 system_pods.go:74] duration metric: took 159.121319ms to wait for pod list to return data ...
	I0916 14:25:25.998079  758184 default_sa.go:34] waiting for default service account to be created ...
	I0916 14:25:26.196343  758184 default_sa.go:45] found service account: "default"
	I0916 14:25:26.196375  758184 default_sa.go:55] duration metric: took 198.288392ms for default service account to be created ...
	I0916 14:25:26.196386  758184 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 14:25:26.398614  758184 system_pods.go:86] 7 kube-system pods found
	I0916 14:25:26.398643  758184 system_pods.go:89] "coredns-6d4b75cb6d-hkfld" [3b37f443-52a5-4b8f-a4bd-df007d09bb2b] Running
	I0916 14:25:26.398649  758184 system_pods.go:89] "etcd-test-preload-848370" [5c85589c-a130-4a22-8184-dac1d800e465] Running
	I0916 14:25:26.398653  758184 system_pods.go:89] "kube-apiserver-test-preload-848370" [3996277d-7f0f-443a-b7ce-fca3cf842130] Running
	I0916 14:25:26.398656  758184 system_pods.go:89] "kube-controller-manager-test-preload-848370" [4cecc937-4ccf-439d-a731-d213ae49af58] Running
	I0916 14:25:26.398659  758184 system_pods.go:89] "kube-proxy-xf7j8" [4a26ac24-029f-4ed0-bb7e-3414e73ebd7b] Running
	I0916 14:25:26.398662  758184 system_pods.go:89] "kube-scheduler-test-preload-848370" [62860887-7958-47ce-8758-cce99d9d9868] Running
	I0916 14:25:26.398665  758184 system_pods.go:89] "storage-provisioner" [2c23dceb-45b1-4cad-9ff8-90ad88ba4de9] Running
	I0916 14:25:26.398672  758184 system_pods.go:126] duration metric: took 202.280138ms to wait for k8s-apps to be running ...
	I0916 14:25:26.398679  758184 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 14:25:26.398728  758184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 14:25:26.412770  758184 system_svc.go:56] duration metric: took 14.076577ms WaitForService to wait for kubelet
	I0916 14:25:26.412823  758184 kubeadm.go:582] duration metric: took 9.828878367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 14:25:26.412844  758184 node_conditions.go:102] verifying NodePressure condition ...
	I0916 14:25:26.595471  758184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 14:25:26.595497  758184 node_conditions.go:123] node cpu capacity is 2
	I0916 14:25:26.595514  758184 node_conditions.go:105] duration metric: took 182.659023ms to run NodePressure ...
	I0916 14:25:26.595526  758184 start.go:241] waiting for startup goroutines ...
	I0916 14:25:26.595532  758184 start.go:246] waiting for cluster config update ...
	I0916 14:25:26.595542  758184 start.go:255] writing updated cluster config ...
	I0916 14:25:26.595803  758184 ssh_runner.go:195] Run: rm -f paused
	I0916 14:25:26.646783  758184 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0916 14:25:26.648480  758184 out.go:201] 
	W0916 14:25:26.649497  758184 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0916 14:25:26.650482  758184 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0916 14:25:26.651498  758184 out.go:177] * Done! kubectl is now configured to use "test-preload-848370" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.501395326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496727501368136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ddbd48e-4567-4a33-9a05-9103e4a3b112 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.505703779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=623f6abf-8d1f-4c86-800a-2c1008a1b47a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.505764960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=623f6abf-8d1f-4c86-800a-2c1008a1b47a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.505912902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f89c8a33ad8ec07ba3c2045d071d30975d324d7d8079fdc6a2abf72ab177b513,PodSandboxId:5a4d9490b29d358b17b9b0d0c8f6a3b4fbc64f3b67d09319535d6c48afc9232f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726496718394922491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkfld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b37f443-52a5-4b8f-a4bd-df007d09bb2b,},Annotations:map[string]string{io.kubernetes.container.hash: a8d4dcee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d044190793dd48ee5cfe10b24544c52158dae246b79c556618c2e5f070143eb,PodSandboxId:5eddba57867074c107ff83faf9396e4eb73969986de2fcb882bf1ea9345eae89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496711346288356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2c23dceb-45b1-4cad-9ff8-90ad88ba4de9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c62760e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04af3c64772ec4213293fd326b195b29763820c2c899f5fdaacb892a4478677,PodSandboxId:cbbbb423359e3d6184592c27ba5c8508ca570bdbebd5183bfd79ed8e8db8c235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726496710944223032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xf7j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
26ac24-029f-4ed0-bb7e-3414e73ebd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 141c128,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97ffb50817248cad8502a4a91de096cf41e67b603b5fac81df23afc9736697c,PodSandboxId:0946576af202da454ba756ba08db13fe347b137f5e085ca9b1539c1915e24c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726496706043937451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cd2075c1
7e13d0d58efa6d6ec511d0,},Annotations:map[string]string{io.kubernetes.container.hash: 58f03083,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537e896b0e395609e752309d8fa4af854bfebff3d916889e58576a8531f68ffa,PodSandboxId:27ba88748a34df7c85d8211f05c72298cdb210c70c216fd35792f55eeef70cb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726496706006159189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33837e682db639da8dfe
3467809e9ef,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f73f456d60f1a752ebde83c758aff28a174887b7a09be7b75451f36828abda6,PodSandboxId:bd0161fc2aaf8109653e906147be6ca963a66cc48b56bf8c1a612a1f63c40abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726496705965681793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac41a53971658775bb885562588036f2,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c0434f85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6404b4d7648c4f5a62385de38163b9e6856f67b4e6e775ee026440f903dbfa79,PodSandboxId:37737efbdb2d3602bf7b04b701d6c56a1ae7c57fe86cbdb14360db704ffd8745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726496705993358483,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a2a449bdd3a1bb62a9630ab76e65a0,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=623f6abf-8d1f-4c86-800a-2c1008a1b47a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.541957296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5a6ad32-d8aa-415e-ae69-9048f5a892db name=/runtime.v1.RuntimeService/Version
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.542034348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5a6ad32-d8aa-415e-ae69-9048f5a892db name=/runtime.v1.RuntimeService/Version
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.543392379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c28fc456-4db1-4d09-8b54-5816686668a2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.544037112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496727544018416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c28fc456-4db1-4d09-8b54-5816686668a2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.544564836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bf747b4-a3d4-422f-a8d4-1475af58e9dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.544625792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bf747b4-a3d4-422f-a8d4-1475af58e9dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.544777229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f89c8a33ad8ec07ba3c2045d071d30975d324d7d8079fdc6a2abf72ab177b513,PodSandboxId:5a4d9490b29d358b17b9b0d0c8f6a3b4fbc64f3b67d09319535d6c48afc9232f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726496718394922491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkfld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b37f443-52a5-4b8f-a4bd-df007d09bb2b,},Annotations:map[string]string{io.kubernetes.container.hash: a8d4dcee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d044190793dd48ee5cfe10b24544c52158dae246b79c556618c2e5f070143eb,PodSandboxId:5eddba57867074c107ff83faf9396e4eb73969986de2fcb882bf1ea9345eae89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496711346288356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2c23dceb-45b1-4cad-9ff8-90ad88ba4de9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c62760e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04af3c64772ec4213293fd326b195b29763820c2c899f5fdaacb892a4478677,PodSandboxId:cbbbb423359e3d6184592c27ba5c8508ca570bdbebd5183bfd79ed8e8db8c235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726496710944223032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xf7j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
26ac24-029f-4ed0-bb7e-3414e73ebd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 141c128,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97ffb50817248cad8502a4a91de096cf41e67b603b5fac81df23afc9736697c,PodSandboxId:0946576af202da454ba756ba08db13fe347b137f5e085ca9b1539c1915e24c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726496706043937451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cd2075c1
7e13d0d58efa6d6ec511d0,},Annotations:map[string]string{io.kubernetes.container.hash: 58f03083,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537e896b0e395609e752309d8fa4af854bfebff3d916889e58576a8531f68ffa,PodSandboxId:27ba88748a34df7c85d8211f05c72298cdb210c70c216fd35792f55eeef70cb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726496706006159189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33837e682db639da8dfe
3467809e9ef,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f73f456d60f1a752ebde83c758aff28a174887b7a09be7b75451f36828abda6,PodSandboxId:bd0161fc2aaf8109653e906147be6ca963a66cc48b56bf8c1a612a1f63c40abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726496705965681793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac41a53971658775bb885562588036f2,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c0434f85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6404b4d7648c4f5a62385de38163b9e6856f67b4e6e775ee026440f903dbfa79,PodSandboxId:37737efbdb2d3602bf7b04b701d6c56a1ae7c57fe86cbdb14360db704ffd8745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726496705993358483,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a2a449bdd3a1bb62a9630ab76e65a0,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bf747b4-a3d4-422f-a8d4-1475af58e9dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.580580697Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbb6b121-756b-41c2-9eba-f60fb8dac15d name=/runtime.v1.RuntimeService/Version
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.580692341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbb6b121-756b-41c2-9eba-f60fb8dac15d name=/runtime.v1.RuntimeService/Version
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.581676908Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ca2dd07-1d32-47b8-8c4b-de6905373f98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.582078861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496727582060010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ca2dd07-1d32-47b8-8c4b-de6905373f98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.582619569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbaf3b70-bbdd-4f10-8f61-22b6b387ef1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.582667506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbaf3b70-bbdd-4f10-8f61-22b6b387ef1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.582840741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f89c8a33ad8ec07ba3c2045d071d30975d324d7d8079fdc6a2abf72ab177b513,PodSandboxId:5a4d9490b29d358b17b9b0d0c8f6a3b4fbc64f3b67d09319535d6c48afc9232f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726496718394922491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkfld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b37f443-52a5-4b8f-a4bd-df007d09bb2b,},Annotations:map[string]string{io.kubernetes.container.hash: a8d4dcee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d044190793dd48ee5cfe10b24544c52158dae246b79c556618c2e5f070143eb,PodSandboxId:5eddba57867074c107ff83faf9396e4eb73969986de2fcb882bf1ea9345eae89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496711346288356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2c23dceb-45b1-4cad-9ff8-90ad88ba4de9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c62760e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04af3c64772ec4213293fd326b195b29763820c2c899f5fdaacb892a4478677,PodSandboxId:cbbbb423359e3d6184592c27ba5c8508ca570bdbebd5183bfd79ed8e8db8c235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726496710944223032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xf7j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
26ac24-029f-4ed0-bb7e-3414e73ebd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 141c128,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97ffb50817248cad8502a4a91de096cf41e67b603b5fac81df23afc9736697c,PodSandboxId:0946576af202da454ba756ba08db13fe347b137f5e085ca9b1539c1915e24c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726496706043937451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cd2075c1
7e13d0d58efa6d6ec511d0,},Annotations:map[string]string{io.kubernetes.container.hash: 58f03083,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537e896b0e395609e752309d8fa4af854bfebff3d916889e58576a8531f68ffa,PodSandboxId:27ba88748a34df7c85d8211f05c72298cdb210c70c216fd35792f55eeef70cb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726496706006159189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33837e682db639da8dfe
3467809e9ef,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f73f456d60f1a752ebde83c758aff28a174887b7a09be7b75451f36828abda6,PodSandboxId:bd0161fc2aaf8109653e906147be6ca963a66cc48b56bf8c1a612a1f63c40abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726496705965681793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac41a53971658775bb885562588036f2,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c0434f85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6404b4d7648c4f5a62385de38163b9e6856f67b4e6e775ee026440f903dbfa79,PodSandboxId:37737efbdb2d3602bf7b04b701d6c56a1ae7c57fe86cbdb14360db704ffd8745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726496705993358483,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a2a449bdd3a1bb62a9630ab76e65a0,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbaf3b70-bbdd-4f10-8f61-22b6b387ef1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.616237586Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=daf23120-356c-4bec-bbb0-f9c28fca06aa name=/runtime.v1.RuntimeService/Version
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.616295733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=daf23120-356c-4bec-bbb0-f9c28fca06aa name=/runtime.v1.RuntimeService/Version
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.623850650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd4790b3-07b0-45db-b277-3ffbebaa0a13 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.624249508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726496727624231249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd4790b3-07b0-45db-b277-3ffbebaa0a13 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.624970527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d1ad7ee-8fdc-465a-b926-b7b2b67f75e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.625025912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d1ad7ee-8fdc-465a-b926-b7b2b67f75e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:25:27 test-preload-848370 crio[661]: time="2024-09-16 14:25:27.625193513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f89c8a33ad8ec07ba3c2045d071d30975d324d7d8079fdc6a2abf72ab177b513,PodSandboxId:5a4d9490b29d358b17b9b0d0c8f6a3b4fbc64f3b67d09319535d6c48afc9232f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726496718394922491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkfld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b37f443-52a5-4b8f-a4bd-df007d09bb2b,},Annotations:map[string]string{io.kubernetes.container.hash: a8d4dcee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d044190793dd48ee5cfe10b24544c52158dae246b79c556618c2e5f070143eb,PodSandboxId:5eddba57867074c107ff83faf9396e4eb73969986de2fcb882bf1ea9345eae89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726496711346288356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2c23dceb-45b1-4cad-9ff8-90ad88ba4de9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c62760e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04af3c64772ec4213293fd326b195b29763820c2c899f5fdaacb892a4478677,PodSandboxId:cbbbb423359e3d6184592c27ba5c8508ca570bdbebd5183bfd79ed8e8db8c235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726496710944223032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xf7j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a
26ac24-029f-4ed0-bb7e-3414e73ebd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 141c128,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97ffb50817248cad8502a4a91de096cf41e67b603b5fac81df23afc9736697c,PodSandboxId:0946576af202da454ba756ba08db13fe347b137f5e085ca9b1539c1915e24c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726496706043937451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cd2075c1
7e13d0d58efa6d6ec511d0,},Annotations:map[string]string{io.kubernetes.container.hash: 58f03083,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537e896b0e395609e752309d8fa4af854bfebff3d916889e58576a8531f68ffa,PodSandboxId:27ba88748a34df7c85d8211f05c72298cdb210c70c216fd35792f55eeef70cb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726496706006159189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33837e682db639da8dfe
3467809e9ef,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f73f456d60f1a752ebde83c758aff28a174887b7a09be7b75451f36828abda6,PodSandboxId:bd0161fc2aaf8109653e906147be6ca963a66cc48b56bf8c1a612a1f63c40abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726496705965681793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac41a53971658775bb885562588036f2,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c0434f85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6404b4d7648c4f5a62385de38163b9e6856f67b4e6e775ee026440f903dbfa79,PodSandboxId:37737efbdb2d3602bf7b04b701d6c56a1ae7c57fe86cbdb14360db704ffd8745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726496705993358483,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-848370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a2a449bdd3a1bb62a9630ab76e65a0,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d1ad7ee-8fdc-465a-b926-b7b2b67f75e0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f89c8a33ad8ec       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   5a4d9490b29d3       coredns-6d4b75cb6d-hkfld
	0d044190793dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   5eddba5786707       storage-provisioner
	b04af3c64772e       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   cbbbb423359e3       kube-proxy-xf7j8
	d97ffb5081724       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   0946576af202d       kube-apiserver-test-preload-848370
	537e896b0e395       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   27ba88748a34d       kube-scheduler-test-preload-848370
	6404b4d7648c4       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   37737efbdb2d3       kube-controller-manager-test-preload-848370
	4f73f456d60f1       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   bd0161fc2aaf8       etcd-test-preload-848370
	
	
	==> coredns [f89c8a33ad8ec07ba3c2045d071d30975d324d7d8079fdc6a2abf72ab177b513] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:56261 - 8552 "HINFO IN 4904069150489900743.5330170530199140931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010875793s
	
	
	==> describe nodes <==
	Name:               test-preload-848370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-848370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=395d984f3991a068de8332d2cc8eeea965525b86
	                    minikube.k8s.io/name=test-preload-848370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T14_23_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 14:23:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-848370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 14:25:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 14:25:20 +0000   Mon, 16 Sep 2024 14:23:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 14:25:20 +0000   Mon, 16 Sep 2024 14:23:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 14:25:20 +0000   Mon, 16 Sep 2024 14:23:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 14:25:20 +0000   Mon, 16 Sep 2024 14:25:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    test-preload-848370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2519664844144608a07a74ae51db1d58
	  System UUID:                25196648-4414-4608-a07a-74ae51db1d58
	  Boot ID:                    d31d4d50-0747-4e87-b0c8-d892b64a7daf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hkfld                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-test-preload-848370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-848370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-848370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-xf7j8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-test-preload-848370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s (x5 over 99s)  kubelet          Node test-preload-848370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x5 over 99s)  kubelet          Node test-preload-848370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x4 over 99s)  kubelet          Node test-preload-848370 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node test-preload-848370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node test-preload-848370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node test-preload-848370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                81s                kubelet          Node test-preload-848370 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node test-preload-848370 event: Registered Node test-preload-848370 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-848370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-848370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-848370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-848370 event: Registered Node test-preload-848370 in Controller
	
	
	==> dmesg <==
	[Sep16 14:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050238] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039136] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.759579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.375883] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.580695] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.693518] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.052904] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049330] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.188838] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.124863] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.278860] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[Sep16 14:25] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.061510] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.781735] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +5.021907] kauditd_printk_skb: 105 callbacks suppressed
	[  +6.572077] systemd-fstab-generator[1731]: Ignoring "noauto" option for root device
	[  +0.100007] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.942022] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [4f73f456d60f1a752ebde83c758aff28a174887b7a09be7b75451f36828abda6] <==
	{"level":"info","ts":"2024-09-16T14:25:06.398Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"be139f16c87a8e87","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-16T14:25:06.399Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T14:25:06.400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 switched to configuration voters=(13696465811965382279)"}
	{"level":"info","ts":"2024-09-16T14:25:06.400Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fd3c3974c415d44","local-member-id":"be139f16c87a8e87","added-peer-id":"be139f16c87a8e87","added-peer-peer-urls":["https://192.168.39.56:2380"]}
	{"level":"info","ts":"2024-09-16T14:25:06.400Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fd3c3974c415d44","local-member-id":"be139f16c87a8e87","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:25:06.401Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:25:06.425Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T14:25:06.430Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2024-09-16T14:25:06.430Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2024-09-16T14:25:06.433Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"be139f16c87a8e87","initial-advertise-peer-urls":["https://192.168.39.56:2380"],"listen-peer-urls":["https://192.168.39.56:2380"],"advertise-client-urls":["https://192.168.39.56:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.56:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T14:25:06.433Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T14:25:07.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T14:25:07.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T14:25:07.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 received MsgPreVoteResp from be139f16c87a8e87 at term 2"}
	{"level":"info","ts":"2024-09-16T14:25:07.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T14:25:07.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 received MsgVoteResp from be139f16c87a8e87 at term 3"}
	{"level":"info","ts":"2024-09-16T14:25:07.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T14:25:07.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be139f16c87a8e87 elected leader be139f16c87a8e87 at term 3"}
	{"level":"info","ts":"2024-09-16T14:25:07.775Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"be139f16c87a8e87","local-member-attributes":"{Name:test-preload-848370 ClientURLs:[https://192.168.39.56:2379]}","request-path":"/0/members/be139f16c87a8e87/attributes","cluster-id":"7fd3c3974c415d44","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T14:25:07.776Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:25:07.776Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:25:07.777Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T14:25:07.787Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.56:2379"}
	{"level":"info","ts":"2024-09-16T14:25:07.787Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T14:25:07.787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:25:27 up 0 min,  0 users,  load average: 0.89, 0.25, 0.08
	Linux test-preload-848370 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d97ffb50817248cad8502a4a91de096cf41e67b603b5fac81df23afc9736697c] <==
	I0916 14:25:10.133376       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0916 14:25:10.133416       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0916 14:25:10.139273       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0916 14:25:10.139354       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0916 14:25:10.146222       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 14:25:10.160939       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 14:25:10.220480       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0916 14:25:10.224802       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0916 14:25:10.229354       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0916 14:25:10.230323       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 14:25:10.237811       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0916 14:25:10.239310       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0916 14:25:10.239542       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0916 14:25:10.294165       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 14:25:10.316044       1 cache.go:39] Caches are synced for autoregister controller
	I0916 14:25:10.789202       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 14:25:11.123776       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 14:25:11.284159       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0916 14:25:11.964256       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0916 14:25:11.975835       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0916 14:25:12.008970       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0916 14:25:12.022755       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 14:25:12.027912       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 14:25:22.638399       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 14:25:22.686452       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6404b4d7648c4f5a62385de38163b9e6856f67b4e6e775ee026440f903dbfa79] <==
	I0916 14:25:22.647861       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0916 14:25:22.652922       1 shared_informer.go:262] Caches are synced for attach detach
	I0916 14:25:22.653456       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0916 14:25:22.653739       1 shared_informer.go:262] Caches are synced for GC
	I0916 14:25:22.657135       1 shared_informer.go:262] Caches are synced for daemon sets
	I0916 14:25:22.659660       1 shared_informer.go:262] Caches are synced for disruption
	I0916 14:25:22.659689       1 disruption.go:371] Sending events to api server.
	I0916 14:25:22.663573       1 shared_informer.go:262] Caches are synced for endpoint
	I0916 14:25:22.705069       1 shared_informer.go:262] Caches are synced for HPA
	I0916 14:25:22.718631       1 shared_informer.go:262] Caches are synced for taint
	I0916 14:25:22.718862       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0916 14:25:22.719136       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0916 14:25:22.719262       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-848370. Assuming now as a timestamp.
	I0916 14:25:22.719304       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0916 14:25:22.719646       1 event.go:294] "Event occurred" object="test-preload-848370" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-848370 event: Registered Node test-preload-848370 in Controller"
	I0916 14:25:22.746911       1 shared_informer.go:262] Caches are synced for resource quota
	I0916 14:25:22.789772       1 shared_informer.go:262] Caches are synced for resource quota
	I0916 14:25:22.815455       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0916 14:25:22.865122       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 14:25:22.865193       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 14:25:22.865247       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 14:25:22.865574       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 14:25:23.275282       1 shared_informer.go:262] Caches are synced for garbage collector
	I0916 14:25:23.316609       1 shared_informer.go:262] Caches are synced for garbage collector
	I0916 14:25:23.316645       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [b04af3c64772ec4213293fd326b195b29763820c2c899f5fdaacb892a4478677] <==
	I0916 14:25:11.139604       1 node.go:163] Successfully retrieved node IP: 192.168.39.56
	I0916 14:25:11.139695       1 server_others.go:138] "Detected node IP" address="192.168.39.56"
	I0916 14:25:11.139730       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0916 14:25:11.258729       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0916 14:25:11.258762       1 server_others.go:206] "Using iptables Proxier"
	I0916 14:25:11.261433       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0916 14:25:11.265728       1 server.go:661] "Version info" version="v1.24.4"
	I0916 14:25:11.265814       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:25:11.270547       1 config.go:317] "Starting service config controller"
	I0916 14:25:11.274027       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0916 14:25:11.274887       1 config.go:226] "Starting endpoint slice config controller"
	I0916 14:25:11.274945       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0916 14:25:11.276903       1 config.go:444] "Starting node config controller"
	I0916 14:25:11.276930       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0916 14:25:11.375902       1 shared_informer.go:262] Caches are synced for service config
	I0916 14:25:11.376309       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0916 14:25:11.377107       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [537e896b0e395609e752309d8fa4af854bfebff3d916889e58576a8531f68ffa] <==
	I0916 14:25:07.179334       1 serving.go:348] Generated self-signed cert in-memory
	W0916 14:25:10.240585       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 14:25:10.241596       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 14:25:10.241655       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 14:25:10.241686       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 14:25:10.277998       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0916 14:25:10.278046       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:25:10.292776       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 14:25:10.292901       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 14:25:10.295820       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0916 14:25:10.295930       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0916 14:25:10.396376       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: E0916 14:25:10.244411    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hkfld" podUID=3b37f443-52a5-4b8f-a4bd-df007d09bb2b
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.275670    1117 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-848370"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.275868    1117 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-848370"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.280116    1117 setters.go:532] "Node became not ready" node="test-preload-848370" condition={Type:Ready Status:False LastHeartbeatTime:2024-09-16 14:25:10.280072252 +0000 UTC m=+5.155098381 LastTransitionTime:2024-09-16 14:25:10.280072252 +0000 UTC m=+5.155098381 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.314636    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume\") pod \"coredns-6d4b75cb6d-hkfld\" (UID: \"3b37f443-52a5-4b8f-a4bd-df007d09bb2b\") " pod="kube-system/coredns-6d4b75cb6d-hkfld"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.314743    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2n7\" (UniqueName: \"kubernetes.io/projected/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-kube-api-access-kr2n7\") pod \"coredns-6d4b75cb6d-hkfld\" (UID: \"3b37f443-52a5-4b8f-a4bd-df007d09bb2b\") " pod="kube-system/coredns-6d4b75cb6d-hkfld"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.314768    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lll5q\" (UniqueName: \"kubernetes.io/projected/2c23dceb-45b1-4cad-9ff8-90ad88ba4de9-kube-api-access-lll5q\") pod \"storage-provisioner\" (UID: \"2c23dceb-45b1-4cad-9ff8-90ad88ba4de9\") " pod="kube-system/storage-provisioner"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.314849    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a26ac24-029f-4ed0-bb7e-3414e73ebd7b-xtables-lock\") pod \"kube-proxy-xf7j8\" (UID: \"4a26ac24-029f-4ed0-bb7e-3414e73ebd7b\") " pod="kube-system/kube-proxy-xf7j8"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.314871    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a26ac24-029f-4ed0-bb7e-3414e73ebd7b-lib-modules\") pod \"kube-proxy-xf7j8\" (UID: \"4a26ac24-029f-4ed0-bb7e-3414e73ebd7b\") " pod="kube-system/kube-proxy-xf7j8"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.314945    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6cr\" (UniqueName: \"kubernetes.io/projected/4a26ac24-029f-4ed0-bb7e-3414e73ebd7b-kube-api-access-5t6cr\") pod \"kube-proxy-xf7j8\" (UID: \"4a26ac24-029f-4ed0-bb7e-3414e73ebd7b\") " pod="kube-system/kube-proxy-xf7j8"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.314965    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2c23dceb-45b1-4cad-9ff8-90ad88ba4de9-tmp\") pod \"storage-provisioner\" (UID: \"2c23dceb-45b1-4cad-9ff8-90ad88ba4de9\") " pod="kube-system/storage-provisioner"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.315046    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a26ac24-029f-4ed0-bb7e-3414e73ebd7b-kube-proxy\") pod \"kube-proxy-xf7j8\" (UID: \"4a26ac24-029f-4ed0-bb7e-3414e73ebd7b\") " pod="kube-system/kube-proxy-xf7j8"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: I0916 14:25:10.315061    1117 reconciler.go:159] "Reconciler: start to sync state"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: E0916 14:25:10.319110    1117 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: E0916 14:25:10.421057    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: E0916 14:25:10.421406    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume podName:3b37f443-52a5-4b8f-a4bd-df007d09bb2b nodeName:}" failed. No retries permitted until 2024-09-16 14:25:10.921178428 +0000 UTC m=+5.796204562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume") pod "coredns-6d4b75cb6d-hkfld" (UID: "3b37f443-52a5-4b8f-a4bd-df007d09bb2b") : object "kube-system"/"coredns" not registered
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: E0916 14:25:10.925108    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 14:25:10 test-preload-848370 kubelet[1117]: E0916 14:25:10.925187    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume podName:3b37f443-52a5-4b8f-a4bd-df007d09bb2b nodeName:}" failed. No retries permitted until 2024-09-16 14:25:11.925172995 +0000 UTC m=+6.800199113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume") pod "coredns-6d4b75cb6d-hkfld" (UID: "3b37f443-52a5-4b8f-a4bd-df007d09bb2b") : object "kube-system"/"coredns" not registered
	Sep 16 14:25:11 test-preload-848370 kubelet[1117]: E0916 14:25:11.933439    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 14:25:11 test-preload-848370 kubelet[1117]: E0916 14:25:11.933620    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume podName:3b37f443-52a5-4b8f-a4bd-df007d09bb2b nodeName:}" failed. No retries permitted until 2024-09-16 14:25:13.933572342 +0000 UTC m=+8.808598458 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume") pod "coredns-6d4b75cb6d-hkfld" (UID: "3b37f443-52a5-4b8f-a4bd-df007d09bb2b") : object "kube-system"/"coredns" not registered
	Sep 16 14:25:12 test-preload-848370 kubelet[1117]: E0916 14:25:12.366707    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hkfld" podUID=3b37f443-52a5-4b8f-a4bd-df007d09bb2b
	Sep 16 14:25:13 test-preload-848370 kubelet[1117]: I0916 14:25:13.371937    1117 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f851ec2f-f64c-4e71-a155-a47f296e9808 path="/var/lib/kubelet/pods/f851ec2f-f64c-4e71-a155-a47f296e9808/volumes"
	Sep 16 14:25:13 test-preload-848370 kubelet[1117]: E0916 14:25:13.949322    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 14:25:13 test-preload-848370 kubelet[1117]: E0916 14:25:13.949567    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume podName:3b37f443-52a5-4b8f-a4bd-df007d09bb2b nodeName:}" failed. No retries permitted until 2024-09-16 14:25:17.949548614 +0000 UTC m=+12.824574747 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3b37f443-52a5-4b8f-a4bd-df007d09bb2b-config-volume") pod "coredns-6d4b75cb6d-hkfld" (UID: "3b37f443-52a5-4b8f-a4bd-df007d09bb2b") : object "kube-system"/"coredns" not registered
	Sep 16 14:25:14 test-preload-848370 kubelet[1117]: E0916 14:25:14.366343    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hkfld" podUID=3b37f443-52a5-4b8f-a4bd-df007d09bb2b
	
	
	==> storage-provisioner [0d044190793dd48ee5cfe10b24544c52158dae246b79c556618c2e5f070143eb] <==
	I0916 14:25:11.531969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 14:25:11.543196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 14:25:11.543325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-848370 -n test-preload-848370
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-848370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-848370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-848370
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-848370: (1.144747465s)
--- FAIL: TestPreload (166.09s)

                                                
                                    
TestKubernetesUpgrade (366.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.518588469s)
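
The command under test is quoted verbatim in the Run line above; exit status 109 after roughly four and a half minutes indicates that minikube itself exited with an error (the exact meaning of 109 is defined by minikube's own exit-code scheme) rather than being killed by the harness, as happened in the earlier setup test. As a hedged sketch only (not the actual version_upgrade_test.go helper), a minimal Go wrapper that replays the same invocation and surfaces its exit code could look like:

// Hypothetical reproduction wrapper (not the test suite's own code): re-run
// the minikube start invocation from the log above and report its exit code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "kubernetes-upgrade-515632",
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--alsologtostderr", "-v=1",
		"--driver=kvm2", "--container-runtime=crio")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			code := exitErr.ExitCode()
			if code < 0 {
				code = 1 // process terminated by a signal
			}
			fmt.Fprintln(os.Stderr, "minikube start exited with code", code)
			os.Exit(code)
		}
		fmt.Fprintln(os.Stderr, "failed to run minikube:", err)
		os.Exit(1)
	}
}

The flags are copied from the failing invocation above; the out/minikube-linux-amd64 path and the profile name are taken from the log and would need to match a local checkout.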

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-515632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-515632" primary control-plane node in "kubernetes-upgrade-515632" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 14:27:25.410253  759664 out.go:345] Setting OutFile to fd 1 ...
	I0916 14:27:25.410408  759664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:27:25.410420  759664 out.go:358] Setting ErrFile to fd 2...
	I0916 14:27:25.410429  759664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:27:25.410725  759664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 14:27:25.411460  759664 out.go:352] Setting JSON to false
	I0916 14:27:25.412612  759664 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14994,"bootTime":1726481851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 14:27:25.412695  759664 start.go:139] virtualization: kvm guest
	I0916 14:27:25.415094  759664 out.go:177] * [kubernetes-upgrade-515632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 14:27:25.416512  759664 notify.go:220] Checking for updates...
	I0916 14:27:25.416945  759664 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 14:27:25.418079  759664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 14:27:25.419138  759664 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 14:27:25.420239  759664 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 14:27:25.422061  759664 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 14:27:25.423942  759664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 14:27:25.425055  759664 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 14:27:25.461261  759664 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 14:27:25.462427  759664 start.go:297] selected driver: kvm2
	I0916 14:27:25.462443  759664 start.go:901] validating driver "kvm2" against <nil>
	I0916 14:27:25.462458  759664 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 14:27:25.463215  759664 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:27:25.463292  759664 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 14:27:25.479627  759664 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 14:27:25.479689  759664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 14:27:25.480001  759664 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 14:27:25.480043  759664 cni.go:84] Creating CNI manager for ""
	I0916 14:27:25.480102  759664 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:27:25.480111  759664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 14:27:25.480192  759664 start.go:340] cluster config:
	{Name:kubernetes-upgrade-515632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:27:25.480327  759664 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:27:25.482104  759664 out.go:177] * Starting "kubernetes-upgrade-515632" primary control-plane node in "kubernetes-upgrade-515632" cluster
	I0916 14:27:25.483211  759664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 14:27:25.483254  759664 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 14:27:25.483261  759664 cache.go:56] Caching tarball of preloaded images
	I0916 14:27:25.483348  759664 preload.go:172] Found /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 14:27:25.483359  759664 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 14:27:25.483700  759664 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/config.json ...
	I0916 14:27:25.483720  759664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/config.json: {Name:mk9aa6d104f63289c2a7715b680429e6bd66aa91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:25.483865  759664 start.go:360] acquireMachinesLock for kubernetes-upgrade-515632: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 14:27:25.483925  759664 start.go:364] duration metric: took 39.66µs to acquireMachinesLock for "kubernetes-upgrade-515632"
	I0916 14:27:25.483947  759664 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-515632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 14:27:25.484009  759664 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 14:27:25.485501  759664 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 14:27:25.485722  759664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:27:25.485779  759664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:27:25.502538  759664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
	I0916 14:27:25.502951  759664 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:27:25.503464  759664 main.go:141] libmachine: Using API Version  1
	I0916 14:27:25.503485  759664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:27:25.503833  759664 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:27:25.504017  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetMachineName
	I0916 14:27:25.504144  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:25.504298  759664 start.go:159] libmachine.API.Create for "kubernetes-upgrade-515632" (driver="kvm2")
	I0916 14:27:25.504326  759664 client.go:168] LocalClient.Create starting
	I0916 14:27:25.504357  759664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem
	I0916 14:27:25.504394  759664 main.go:141] libmachine: Decoding PEM data...
	I0916 14:27:25.504421  759664 main.go:141] libmachine: Parsing certificate...
	I0916 14:27:25.504492  759664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem
	I0916 14:27:25.504519  759664 main.go:141] libmachine: Decoding PEM data...
	I0916 14:27:25.504540  759664 main.go:141] libmachine: Parsing certificate...
	I0916 14:27:25.504574  759664 main.go:141] libmachine: Running pre-create checks...
	I0916 14:27:25.504587  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .PreCreateCheck
	I0916 14:27:25.504882  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetConfigRaw
	I0916 14:27:25.505241  759664 main.go:141] libmachine: Creating machine...
	I0916 14:27:25.505256  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .Create
	I0916 14:27:25.505385  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Creating KVM machine...
	I0916 14:27:25.506567  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found existing default KVM network
	I0916 14:27:25.507355  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:25.507197  759703 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015300}
	I0916 14:27:25.507397  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | created network xml: 
	I0916 14:27:25.507415  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | <network>
	I0916 14:27:25.507426  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |   <name>mk-kubernetes-upgrade-515632</name>
	I0916 14:27:25.507435  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |   <dns enable='no'/>
	I0916 14:27:25.507443  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |   
	I0916 14:27:25.507453  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 14:27:25.507462  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |     <dhcp>
	I0916 14:27:25.507489  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 14:27:25.507505  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |     </dhcp>
	I0916 14:27:25.507520  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |   </ip>
	I0916 14:27:25.507531  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG |   
	I0916 14:27:25.507545  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | </network>
	I0916 14:27:25.507562  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | 
	I0916 14:27:25.512619  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | trying to create private KVM network mk-kubernetes-upgrade-515632 192.168.39.0/24...
	I0916 14:27:25.590686  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | private KVM network mk-kubernetes-upgrade-515632 192.168.39.0/24 created
	I0916 14:27:25.590716  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Setting up store path in /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632 ...
	I0916 14:27:25.590733  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Building disk image from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 14:27:25.590806  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:25.590672  759703 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 14:27:25.591007  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Downloading /home/jenkins/minikube-integration/19652-713072/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 14:27:25.987845  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:25.987707  759703 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa...
	I0916 14:27:26.249836  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:26.249661  759703 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/kubernetes-upgrade-515632.rawdisk...
	I0916 14:27:26.249880  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Writing magic tar header
	I0916 14:27:26.249901  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Writing SSH key tar header
	I0916 14:27:26.249924  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:26.249834  759703 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632 ...
	I0916 14:27:26.249941  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632
	I0916 14:27:26.250013  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632 (perms=drwx------)
	I0916 14:27:26.250038  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube/machines
	I0916 14:27:26.250050  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube/machines (perms=drwxr-xr-x)
	I0916 14:27:26.250069  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072/.minikube (perms=drwxr-xr-x)
	I0916 14:27:26.250082  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Setting executable bit set on /home/jenkins/minikube-integration/19652-713072 (perms=drwxrwxr-x)
	I0916 14:27:26.250093  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 14:27:26.250099  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 14:27:26.250109  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Creating domain...
	I0916 14:27:26.250130  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 14:27:26.250146  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19652-713072
	I0916 14:27:26.250158  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 14:27:26.250171  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Checking permissions on dir: /home/jenkins
	I0916 14:27:26.250182  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Checking permissions on dir: /home
	I0916 14:27:26.250194  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Skipping /home - not owner
	I0916 14:27:26.251211  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) define libvirt domain using xml: 
	I0916 14:27:26.251254  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) <domain type='kvm'>
	I0916 14:27:26.251269  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   <name>kubernetes-upgrade-515632</name>
	I0916 14:27:26.251282  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   <memory unit='MiB'>2200</memory>
	I0916 14:27:26.251294  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   <vcpu>2</vcpu>
	I0916 14:27:26.251302  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   <features>
	I0916 14:27:26.251313  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <acpi/>
	I0916 14:27:26.251318  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <apic/>
	I0916 14:27:26.251323  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <pae/>
	I0916 14:27:26.251337  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     
	I0916 14:27:26.251345  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   </features>
	I0916 14:27:26.251361  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   <cpu mode='host-passthrough'>
	I0916 14:27:26.251367  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   
	I0916 14:27:26.251372  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   </cpu>
	I0916 14:27:26.251381  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   <os>
	I0916 14:27:26.251390  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <type>hvm</type>
	I0916 14:27:26.251400  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <boot dev='cdrom'/>
	I0916 14:27:26.251427  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <boot dev='hd'/>
	I0916 14:27:26.251450  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <bootmenu enable='no'/>
	I0916 14:27:26.251460  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   </os>
	I0916 14:27:26.251470  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   <devices>
	I0916 14:27:26.251481  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <disk type='file' device='cdrom'>
	I0916 14:27:26.251502  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/boot2docker.iso'/>
	I0916 14:27:26.251515  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <target dev='hdc' bus='scsi'/>
	I0916 14:27:26.251529  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <readonly/>
	I0916 14:27:26.251541  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     </disk>
	I0916 14:27:26.251551  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <disk type='file' device='disk'>
	I0916 14:27:26.251561  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 14:27:26.251584  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <source file='/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/kubernetes-upgrade-515632.rawdisk'/>
	I0916 14:27:26.251609  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <target dev='hda' bus='virtio'/>
	I0916 14:27:26.251627  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     </disk>
	I0916 14:27:26.251639  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <interface type='network'>
	I0916 14:27:26.251650  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <source network='mk-kubernetes-upgrade-515632'/>
	I0916 14:27:26.251659  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <model type='virtio'/>
	I0916 14:27:26.251666  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     </interface>
	I0916 14:27:26.251672  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <interface type='network'>
	I0916 14:27:26.251678  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <source network='default'/>
	I0916 14:27:26.251683  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <model type='virtio'/>
	I0916 14:27:26.251693  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     </interface>
	I0916 14:27:26.251703  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <serial type='pty'>
	I0916 14:27:26.251717  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <target port='0'/>
	I0916 14:27:26.251725  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     </serial>
	I0916 14:27:26.251730  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <console type='pty'>
	I0916 14:27:26.251738  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <target type='serial' port='0'/>
	I0916 14:27:26.251742  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     </console>
	I0916 14:27:26.251751  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     <rng model='virtio'>
	I0916 14:27:26.251758  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)       <backend model='random'>/dev/random</backend>
	I0916 14:27:26.251763  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     </rng>
	I0916 14:27:26.251768  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     
	I0916 14:27:26.251772  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)     
	I0916 14:27:26.251777  759664 main.go:141] libmachine: (kubernetes-upgrade-515632)   </devices>
	I0916 14:27:26.251782  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) </domain>
	I0916 14:27:26.251788  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) 
	I0916 14:27:26.255682  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:6d:a8:d2 in network default
	I0916 14:27:26.256214  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Ensuring networks are active...
	I0916 14:27:26.256232  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:26.256951  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Ensuring network default is active
	I0916 14:27:26.257222  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Ensuring network mk-kubernetes-upgrade-515632 is active
	I0916 14:27:26.257684  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Getting domain xml...
	I0916 14:27:26.258372  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Creating domain...
	I0916 14:27:27.557053  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Waiting to get IP...
	I0916 14:27:27.557805  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:27.558204  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:27.558234  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:27.558158  759703 retry.go:31] will retry after 245.608549ms: waiting for machine to come up
	I0916 14:27:27.805957  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:27.806382  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:27.806407  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:27.806347  759703 retry.go:31] will retry after 318.750931ms: waiting for machine to come up
	I0916 14:27:28.127220  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:28.127634  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:28.127659  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:28.127604  759703 retry.go:31] will retry after 389.128516ms: waiting for machine to come up
	I0916 14:27:28.518027  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:28.518491  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:28.518519  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:28.518447  759703 retry.go:31] will retry after 466.258463ms: waiting for machine to come up
	I0916 14:27:28.985972  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:28.986333  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:28.986362  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:28.986289  759703 retry.go:31] will retry after 719.879079ms: waiting for machine to come up
	I0916 14:27:29.707728  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:29.708147  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:29.708173  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:29.708067  759703 retry.go:31] will retry after 669.871726ms: waiting for machine to come up
	I0916 14:27:30.380091  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:30.380523  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:30.380556  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:30.380467  759703 retry.go:31] will retry after 1.061056343s: waiting for machine to come up
	I0916 14:27:31.442834  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:31.443264  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:31.443292  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:31.443223  759703 retry.go:31] will retry after 1.131645945s: waiting for machine to come up
	I0916 14:27:32.576498  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:32.577000  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:32.577034  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:32.576902  759703 retry.go:31] will retry after 1.640481651s: waiting for machine to come up
	I0916 14:27:34.218804  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:34.219245  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:34.219270  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:34.219204  759703 retry.go:31] will retry after 1.905112475s: waiting for machine to come up
	I0916 14:27:36.126149  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:36.126480  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:36.126503  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:36.126422  759703 retry.go:31] will retry after 1.980131611s: waiting for machine to come up
	I0916 14:27:38.107577  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:38.107917  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:38.107983  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:38.107894  759703 retry.go:31] will retry after 2.70400211s: waiting for machine to come up
	I0916 14:27:40.813322  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:40.813760  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:40.813785  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:40.813711  759703 retry.go:31] will retry after 3.005849862s: waiting for machine to come up
	I0916 14:27:43.821161  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:43.821546  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find current IP address of domain kubernetes-upgrade-515632 in network mk-kubernetes-upgrade-515632
	I0916 14:27:43.821568  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | I0916 14:27:43.821492  759703 retry.go:31] will retry after 3.575492069s: waiting for machine to come up
	I0916 14:27:47.399742  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.400171  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has current primary IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.400194  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Found IP for machine: 192.168.39.161
	I0916 14:27:47.400212  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Reserving static IP address...
	I0916 14:27:47.400567  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-515632", mac: "52:54:00:18:1d:fe", ip: "192.168.39.161"} in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.472682  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Getting to WaitForSSH function...
	I0916 14:27:47.472712  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Reserved static IP address: 192.168.39.161
	I0916 14:27:47.472725  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Waiting for SSH to be available...
	I0916 14:27:47.475399  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.475823  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:minikube Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:47.475848  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.475980  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Using SSH client type: external
	I0916 14:27:47.476025  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa (-rw-------)
	I0916 14:27:47.476075  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 14:27:47.476099  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | About to run SSH command:
	I0916 14:27:47.476113  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | exit 0
	I0916 14:27:47.605509  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | SSH cmd err, output: <nil>: 
	I0916 14:27:47.605707  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) KVM machine creation complete!
	I0916 14:27:47.606110  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetConfigRaw
	I0916 14:27:47.606803  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:47.607027  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:47.607207  759664 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 14:27:47.607225  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetState
	I0916 14:27:47.608594  759664 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 14:27:47.608608  759664 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 14:27:47.608613  759664 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 14:27:47.608619  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:47.610888  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.611232  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:47.611260  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.611388  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:47.611569  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.611706  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.611827  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:47.612039  759664 main.go:141] libmachine: Using SSH client type: native
	I0916 14:27:47.612285  759664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:27:47.612300  759664 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 14:27:47.720648  759664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 14:27:47.720671  759664 main.go:141] libmachine: Detecting the provisioner...
	I0916 14:27:47.720679  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:47.723270  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.723602  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:47.723634  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.723800  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:47.723994  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.724200  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.724315  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:47.724491  759664 main.go:141] libmachine: Using SSH client type: native
	I0916 14:27:47.724656  759664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:27:47.724667  759664 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 14:27:47.834216  759664 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 14:27:47.834320  759664 main.go:141] libmachine: found compatible host: buildroot
	I0916 14:27:47.834333  759664 main.go:141] libmachine: Provisioning with buildroot...
	I0916 14:27:47.834345  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetMachineName
	I0916 14:27:47.834572  759664 buildroot.go:166] provisioning hostname "kubernetes-upgrade-515632"
	I0916 14:27:47.834603  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetMachineName
	I0916 14:27:47.834783  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:47.837133  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.837695  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:47.837725  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.837816  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:47.837983  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.838153  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.838244  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:47.838383  759664 main.go:141] libmachine: Using SSH client type: native
	I0916 14:27:47.838550  759664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:27:47.838562  759664 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-515632 && echo "kubernetes-upgrade-515632" | sudo tee /etc/hostname
	I0916 14:27:47.964958  759664 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-515632
	
	I0916 14:27:47.964989  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:47.967889  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.968243  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:47.968271  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:47.968445  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:47.968624  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.968754  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:47.968858  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:47.969001  759664 main.go:141] libmachine: Using SSH client type: native
	I0916 14:27:47.969181  759664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:27:47.969198  759664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-515632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-515632/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-515632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 14:27:48.089633  759664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 14:27:48.089680  759664 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 14:27:48.089748  759664 buildroot.go:174] setting up certificates
	I0916 14:27:48.089768  759664 provision.go:84] configureAuth start
	I0916 14:27:48.089790  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetMachineName
	I0916 14:27:48.090059  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetIP
	I0916 14:27:48.092254  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.092558  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.092595  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.092677  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:48.095218  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.095551  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.095577  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.095673  759664 provision.go:143] copyHostCerts
	I0916 14:27:48.095729  759664 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 14:27:48.095743  759664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 14:27:48.095812  759664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 14:27:48.095984  759664 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 14:27:48.095997  759664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 14:27:48.096034  759664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 14:27:48.096105  759664 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 14:27:48.096116  759664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 14:27:48.096156  759664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 14:27:48.096215  759664 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-515632 san=[127.0.0.1 192.168.39.161 kubernetes-upgrade-515632 localhost minikube]
	I0916 14:27:48.179126  759664 provision.go:177] copyRemoteCerts
	I0916 14:27:48.179185  759664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 14:27:48.179214  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:48.181458  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.181783  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.181812  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.181982  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:48.182166  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.182318  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:48.182440  759664 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:27:48.267535  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 14:27:48.290602  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0916 14:27:48.313019  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 14:27:48.334867  759664 provision.go:87] duration metric: took 245.080475ms to configureAuth
	I0916 14:27:48.334895  759664 buildroot.go:189] setting minikube options for container-runtime
	I0916 14:27:48.335041  759664 config.go:182] Loaded profile config "kubernetes-upgrade-515632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 14:27:48.335150  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:48.337834  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.338187  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.338210  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.338366  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:48.338576  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.338742  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.338882  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:48.339036  759664 main.go:141] libmachine: Using SSH client type: native
	I0916 14:27:48.339193  759664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:27:48.339207  759664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 14:27:48.561046  759664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 14:27:48.561077  759664 main.go:141] libmachine: Checking connection to Docker...
	I0916 14:27:48.561089  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetURL
	I0916 14:27:48.562417  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | Using libvirt version 6000000
	I0916 14:27:48.564907  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.565235  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.565271  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.565463  759664 main.go:141] libmachine: Docker is up and running!
	I0916 14:27:48.565477  759664 main.go:141] libmachine: Reticulating splines...
	I0916 14:27:48.565485  759664 client.go:171] duration metric: took 23.061149984s to LocalClient.Create
	I0916 14:27:48.565512  759664 start.go:167] duration metric: took 23.061214655s to libmachine.API.Create "kubernetes-upgrade-515632"
	I0916 14:27:48.565525  759664 start.go:293] postStartSetup for "kubernetes-upgrade-515632" (driver="kvm2")
	I0916 14:27:48.565561  759664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 14:27:48.565598  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:48.565854  759664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 14:27:48.565885  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:48.568199  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.568708  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.568741  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.568947  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:48.569145  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.569275  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:48.569400  759664 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:27:48.651073  759664 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 14:27:48.654991  759664 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 14:27:48.655012  759664 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 14:27:48.655072  759664 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 14:27:48.655184  759664 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 14:27:48.655313  759664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 14:27:48.664277  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:27:48.687043  759664 start.go:296] duration metric: took 121.504601ms for postStartSetup
	I0916 14:27:48.687096  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetConfigRaw
	I0916 14:27:48.687697  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetIP
	I0916 14:27:48.690380  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.690711  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.690731  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.690921  759664 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/config.json ...
	I0916 14:27:48.691099  759664 start.go:128] duration metric: took 23.207079004s to createHost
	I0916 14:27:48.691122  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:48.693378  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.693707  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.693738  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.693858  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:48.694054  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.694228  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.694405  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:48.694550  759664 main.go:141] libmachine: Using SSH client type: native
	I0916 14:27:48.694715  759664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:27:48.694730  759664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 14:27:48.802113  759664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726496868.773812722
	
	I0916 14:27:48.802139  759664 fix.go:216] guest clock: 1726496868.773812722
	I0916 14:27:48.802146  759664 fix.go:229] Guest: 2024-09-16 14:27:48.773812722 +0000 UTC Remote: 2024-09-16 14:27:48.691110568 +0000 UTC m=+23.319749284 (delta=82.702154ms)
	I0916 14:27:48.802183  759664 fix.go:200] guest clock delta is within tolerance: 82.702154ms
	I0916 14:27:48.802189  759664 start.go:83] releasing machines lock for "kubernetes-upgrade-515632", held for 23.31825161s
	I0916 14:27:48.802213  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:48.802489  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetIP
	I0916 14:27:48.805135  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.805443  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.805467  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.805629  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:48.806175  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:48.806353  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:27:48.806432  759664 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 14:27:48.806497  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:48.806579  759664 ssh_runner.go:195] Run: cat /version.json
	I0916 14:27:48.806609  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:27:48.809377  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.809645  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.809818  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.809849  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.810061  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:48.810091  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:48.810103  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:48.810227  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:27:48.810340  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.810410  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:27:48.810476  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:48.810605  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:27:48.810601  759664 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:27:48.810872  759664 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:27:48.891279  759664 ssh_runner.go:195] Run: systemctl --version
	I0916 14:27:48.915220  759664 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 14:27:49.073329  759664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 14:27:49.079540  759664 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 14:27:49.079623  759664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 14:27:49.094664  759664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 14:27:49.094688  759664 start.go:495] detecting cgroup driver to use...
	I0916 14:27:49.094760  759664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 14:27:49.109872  759664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 14:27:49.123559  759664 docker.go:217] disabling cri-docker service (if available) ...
	I0916 14:27:49.123624  759664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 14:27:49.136870  759664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 14:27:49.149919  759664 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 14:27:49.259396  759664 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 14:27:49.432872  759664 docker.go:233] disabling docker service ...
	I0916 14:27:49.432966  759664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 14:27:49.446696  759664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 14:27:49.458829  759664 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 14:27:49.592220  759664 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 14:27:49.719258  759664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 14:27:49.732796  759664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 14:27:49.751127  759664 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 14:27:49.751210  759664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:27:49.761305  759664 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 14:27:49.761365  759664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:27:49.771918  759664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:27:49.781891  759664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:27:49.791921  759664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 14:27:49.802644  759664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 14:27:49.814066  759664 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 14:27:49.814150  759664 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 14:27:49.828367  759664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 14:27:49.837773  759664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:27:49.963186  759664 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 14:27:50.064423  759664 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 14:27:50.064509  759664 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 14:27:50.069222  759664 start.go:563] Will wait 60s for crictl version
	I0916 14:27:50.069285  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:50.072762  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 14:27:50.120411  759664 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 14:27:50.120500  759664 ssh_runner.go:195] Run: crio --version
	I0916 14:27:50.154600  759664 ssh_runner.go:195] Run: crio --version
	I0916 14:27:50.189686  759664 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0916 14:27:50.190791  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetIP
	I0916 14:27:50.193877  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:50.194299  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:27:40 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:27:50.194329  759664 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:27:50.194580  759664 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 14:27:50.198583  759664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 14:27:50.211127  759664 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-515632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 14:27:50.211236  759664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 14:27:50.211285  759664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:27:50.241489  759664 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 14:27:50.241569  759664 ssh_runner.go:195] Run: which lz4
	I0916 14:27:50.245486  759664 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 14:27:50.249471  759664 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 14:27:50.249500  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 14:27:51.914982  759664 crio.go:462] duration metric: took 1.669538267s to copy over tarball
	I0916 14:27:51.915065  759664 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 14:27:54.401166  759664 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.486065523s)
	I0916 14:27:54.401194  759664 crio.go:469] duration metric: took 2.486184883s to extract the tarball
	I0916 14:27:54.401202  759664 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 14:27:54.443489  759664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:27:54.492014  759664 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 14:27:54.492051  759664 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 14:27:54.492117  759664 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 14:27:54.492137  759664 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 14:27:54.492155  759664 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 14:27:54.492174  759664 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 14:27:54.492243  759664 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 14:27:54.492258  759664 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 14:27:54.492263  759664 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 14:27:54.492193  759664 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 14:27:54.493860  759664 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 14:27:54.493890  759664 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 14:27:54.493938  759664 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 14:27:54.493861  759664 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 14:27:54.494029  759664 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 14:27:54.493861  759664 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 14:27:54.494154  759664 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 14:27:54.494240  759664 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 14:27:54.667039  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 14:27:54.684071  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 14:27:54.712660  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 14:27:54.725418  759664 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 14:27:54.725466  759664 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 14:27:54.725512  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:54.736986  759664 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 14:27:54.737046  759664 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 14:27:54.737093  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:54.737467  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 14:27:54.759805  759664 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 14:27:54.759853  759664 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 14:27:54.759875  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 14:27:54.759891  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:54.759938  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 14:27:54.786476  759664 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 14:27:54.786524  759664 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 14:27:54.786568  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:54.816427  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 14:27:54.816427  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 14:27:54.822071  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 14:27:54.822088  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 14:27:54.904646  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 14:27:54.904698  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 14:27:54.906264  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 14:27:54.906325  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 14:27:54.974969  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 14:27:54.979655  759664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 14:27:54.979655  759664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 14:27:54.993952  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 14:27:55.024090  759664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 14:27:55.042023  759664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 14:27:55.067890  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 14:27:55.106529  759664 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 14:27:55.106573  759664 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 14:27:55.106618  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:55.110473  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 14:27:55.122530  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 14:27:55.146527  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 14:27:55.173051  759664 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 14:27:55.173105  759664 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 14:27:55.173175  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:55.192307  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 14:27:55.192538  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 14:27:55.234934  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 14:27:55.234951  759664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 14:27:55.266477  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 14:27:55.296719  759664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 14:27:55.317980  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 14:27:55.866121  759664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 14:27:55.911771  759664 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 14:27:55.911815  759664 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 14:27:55.911882  759664 ssh_runner.go:195] Run: which crictl
	I0916 14:27:55.916560  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 14:27:55.954397  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 14:27:55.986123  759664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 14:27:56.018150  759664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 14:27:56.018219  759664 cache_images.go:92] duration metric: took 1.526151764s to LoadCachedImages
	W0916 14:27:56.018281  759664 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19652-713072/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0916 14:27:56.018298  759664 kubeadm.go:934] updating node { 192.168.39.161 8443 v1.20.0 crio true true} ...
	I0916 14:27:56.018440  759664 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-515632 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 14:27:56.018506  759664 ssh_runner.go:195] Run: crio config
	I0916 14:27:56.081123  759664 cni.go:84] Creating CNI manager for ""
	I0916 14:27:56.081150  759664 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:27:56.081163  759664 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 14:27:56.081185  759664 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-515632 NodeName:kubernetes-upgrade-515632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 14:27:56.081354  759664 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-515632"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 14:27:56.081441  759664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 14:27:56.091737  759664 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 14:27:56.091826  759664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 14:27:56.101038  759664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0916 14:27:56.118970  759664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 14:27:56.136576  759664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0916 14:27:56.154335  759664 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I0916 14:27:56.158180  759664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 14:27:56.170277  759664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:27:56.294859  759664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 14:27:56.312651  759664 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632 for IP: 192.168.39.161
	I0916 14:27:56.312677  759664 certs.go:194] generating shared ca certs ...
	I0916 14:27:56.312699  759664 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:56.312883  759664 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 14:27:56.312922  759664 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 14:27:56.312932  759664 certs.go:256] generating profile certs ...
	I0916 14:27:56.312987  759664 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/client.key
	I0916 14:27:56.313006  759664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/client.crt with IP's: []
	I0916 14:27:56.952516  759664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/client.crt ...
	I0916 14:27:56.952554  759664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/client.crt: {Name:mk82ecca5b96932351d370de945f29607bf339c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:56.952725  759664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/client.key ...
	I0916 14:27:56.952739  759664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/client.key: {Name:mkb79ffd0b022ea3e22d41cbd2a8dcd39ee37344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:56.952814  759664 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key.0d786eb0
	I0916 14:27:56.952833  759664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.crt.0d786eb0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.161]
	I0916 14:27:57.197659  759664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.crt.0d786eb0 ...
	I0916 14:27:57.197704  759664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.crt.0d786eb0: {Name:mk2b9d7833bcb4bc86810f447c57ff310098b607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:57.197890  759664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key.0d786eb0 ...
	I0916 14:27:57.197910  759664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key.0d786eb0: {Name:mk123d1bfa7aef5721f2403e4745c0442b6446f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:57.198009  759664 certs.go:381] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.crt.0d786eb0 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.crt
	I0916 14:27:57.198127  759664 certs.go:385] copying /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key.0d786eb0 -> /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key
	I0916 14:27:57.198215  759664 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.key
	I0916 14:27:57.198238  759664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.crt with IP's: []
	I0916 14:27:57.352414  759664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.crt ...
	I0916 14:27:57.352449  759664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.crt: {Name:mk109af067d7e688d71dde90995ca073d1ba1386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:57.352635  759664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.key ...
	I0916 14:27:57.352654  759664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.key: {Name:mk6479151f4ea37541face79dd4e769c7e218d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:27:57.352849  759664 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 14:27:57.352899  759664 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 14:27:57.352915  759664 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 14:27:57.352950  759664 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 14:27:57.352983  759664 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 14:27:57.353015  759664 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 14:27:57.353072  759664 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:27:57.353664  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 14:27:57.379384  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 14:27:57.405637  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 14:27:57.429389  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 14:27:57.452731  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 14:27:57.476517  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 14:27:57.499582  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 14:27:57.522855  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 14:27:57.546240  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 14:27:57.569214  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 14:27:57.593480  759664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 14:27:57.619157  759664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 14:27:57.636524  759664 ssh_runner.go:195] Run: openssl version
	I0916 14:27:57.642312  759664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 14:27:57.654128  759664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:27:57.658536  759664 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:27:57.658593  759664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:27:57.664255  759664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 14:27:57.675042  759664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 14:27:57.685796  759664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 14:27:57.690162  759664 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:27:57.690232  759664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 14:27:57.695735  759664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 14:27:57.706064  759664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 14:27:57.716547  759664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 14:27:57.722207  759664 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:27:57.722264  759664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 14:27:57.728008  759664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 14:27:57.738702  759664 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:27:57.742599  759664 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 14:27:57.742659  759664 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-515632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:27:57.742753  759664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 14:27:57.742830  759664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:27:57.781841  759664 cri.go:89] found id: ""
	I0916 14:27:57.781921  759664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 14:27:57.792165  759664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 14:27:57.801581  759664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 14:27:57.811169  759664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 14:27:57.811193  759664 kubeadm.go:157] found existing configuration files:
	
	I0916 14:27:57.811247  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 14:27:57.820093  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 14:27:57.820159  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 14:27:57.829580  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 14:27:57.838341  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 14:27:57.838395  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 14:27:57.849776  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 14:27:57.858552  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 14:27:57.858615  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 14:27:57.867426  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 14:27:57.876125  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 14:27:57.876174  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 14:27:57.885423  759664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 14:27:58.155895  759664 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 14:29:56.175353  759664 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0916 14:29:56.175463  759664 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0916 14:29:56.177356  759664 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 14:29:56.177430  759664 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 14:29:56.177535  759664 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 14:29:56.177713  759664 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 14:29:56.177839  759664 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 14:29:56.177954  759664 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 14:29:56.301719  759664 out.go:235]   - Generating certificates and keys ...
	I0916 14:29:56.301853  759664 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 14:29:56.301920  759664 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 14:29:56.301989  759664 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 14:29:56.302091  759664 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 14:29:56.302187  759664 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 14:29:56.302257  759664 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 14:29:56.302359  759664 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 14:29:56.302537  759664 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-515632 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	I0916 14:29:56.302587  759664 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 14:29:56.302699  759664 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-515632 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	I0916 14:29:56.302755  759664 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 14:29:56.302813  759664 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 14:29:56.302853  759664 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 14:29:56.302909  759664 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 14:29:56.302968  759664 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 14:29:56.303042  759664 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 14:29:56.303132  759664 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 14:29:56.303208  759664 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 14:29:56.303383  759664 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 14:29:56.303508  759664 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 14:29:56.303560  759664 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 14:29:56.303647  759664 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 14:29:56.334063  759664 out.go:235]   - Booting up control plane ...
	I0916 14:29:56.334189  759664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 14:29:56.334301  759664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 14:29:56.334397  759664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 14:29:56.334516  759664 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 14:29:56.334750  759664 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 14:29:56.334829  759664 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0916 14:29:56.334941  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:29:56.335156  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:29:56.335223  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:29:56.335437  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:29:56.335535  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:29:56.335767  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:29:56.335864  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:29:56.336084  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:29:56.336180  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:29:56.336399  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:29:56.336408  759664 kubeadm.go:310] 
	I0916 14:29:56.336454  759664 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0916 14:29:56.336496  759664 kubeadm.go:310] 		timed out waiting for the condition
	I0916 14:29:56.336502  759664 kubeadm.go:310] 
	I0916 14:29:56.336531  759664 kubeadm.go:310] 	This error is likely caused by:
	I0916 14:29:56.336563  759664 kubeadm.go:310] 		- The kubelet is not running
	I0916 14:29:56.336672  759664 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0916 14:29:56.336680  759664 kubeadm.go:310] 
	I0916 14:29:56.336765  759664 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0916 14:29:56.336797  759664 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0916 14:29:56.336825  759664 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0916 14:29:56.336831  759664 kubeadm.go:310] 
	I0916 14:29:56.336928  759664 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0916 14:29:56.337008  759664 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0916 14:29:56.337015  759664 kubeadm.go:310] 
	I0916 14:29:56.337113  759664 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0916 14:29:56.337207  759664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0916 14:29:56.337320  759664 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0916 14:29:56.337440  759664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0916 14:29:56.337471  759664 kubeadm.go:310] 
	W0916 14:29:56.337598  759664 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-515632 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-515632 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-515632 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-515632 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0916 14:29:56.337641  759664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0916 14:29:56.810845  759664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 14:29:56.830070  759664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 14:29:56.841366  759664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 14:29:56.841401  759664 kubeadm.go:157] found existing configuration files:
	
	I0916 14:29:56.841458  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 14:29:56.853540  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 14:29:56.853604  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 14:29:56.865893  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 14:29:56.874682  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 14:29:56.874747  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 14:29:56.886935  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 14:29:56.896631  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 14:29:56.896699  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 14:29:56.910374  759664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 14:29:56.920018  759664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 14:29:56.920087  759664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 14:29:56.929822  759664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 14:29:57.010548  759664 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 14:29:57.010704  759664 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 14:29:57.186601  759664 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 14:29:57.186866  759664 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 14:29:57.187021  759664 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 14:29:57.403598  759664 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 14:29:57.405815  759664 out.go:235]   - Generating certificates and keys ...
	I0916 14:29:57.406005  759664 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 14:29:57.406169  759664 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 14:29:57.406289  759664 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 14:29:57.406369  759664 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 14:29:57.406467  759664 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 14:29:57.406546  759664 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 14:29:57.406767  759664 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 14:29:57.407284  759664 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 14:29:57.407819  759664 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 14:29:57.408233  759664 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 14:29:57.408404  759664 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 14:29:57.408537  759664 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 14:29:57.570751  759664 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 14:29:57.653188  759664 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 14:29:57.803936  759664 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 14:29:57.984926  759664 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 14:29:58.010153  759664 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 14:29:58.011789  759664 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 14:29:58.011880  759664 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 14:29:58.181024  759664 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 14:29:58.183532  759664 out.go:235]   - Booting up control plane ...
	I0916 14:29:58.183671  759664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 14:29:58.198679  759664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 14:29:58.200798  759664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 14:29:58.201932  759664 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 14:29:58.209297  759664 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 14:30:38.211621  759664 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0916 14:30:38.212150  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:30:38.212354  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:30:43.212792  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:30:43.213014  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:30:53.213730  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:30:53.214017  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:31:13.213061  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:31:13.213370  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:31:53.213185  759664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 14:31:53.213463  759664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 14:31:53.213476  759664 kubeadm.go:310] 
	I0916 14:31:53.213603  759664 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0916 14:31:53.213691  759664 kubeadm.go:310] 		timed out waiting for the condition
	I0916 14:31:53.213718  759664 kubeadm.go:310] 
	I0916 14:31:53.213773  759664 kubeadm.go:310] 	This error is likely caused by:
	I0916 14:31:53.213823  759664 kubeadm.go:310] 		- The kubelet is not running
	I0916 14:31:53.213972  759664 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0916 14:31:53.213985  759664 kubeadm.go:310] 
	I0916 14:31:53.214128  759664 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0916 14:31:53.214203  759664 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0916 14:31:53.214246  759664 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0916 14:31:53.214256  759664 kubeadm.go:310] 
	I0916 14:31:53.214409  759664 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0916 14:31:53.214524  759664 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0916 14:31:53.214546  759664 kubeadm.go:310] 
	I0916 14:31:53.214703  759664 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0916 14:31:53.214826  759664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0916 14:31:53.214935  759664 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0916 14:31:53.215048  759664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0916 14:31:53.215058  759664 kubeadm.go:310] 
	I0916 14:31:53.215670  759664 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 14:31:53.215785  759664 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0916 14:31:53.215878  759664 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0916 14:31:53.215962  759664 kubeadm.go:394] duration metric: took 3m55.473307378s to StartCluster
	I0916 14:31:53.216009  759664 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 14:31:53.216070  759664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 14:31:53.268010  759664 cri.go:89] found id: ""
	I0916 14:31:53.268037  759664 logs.go:276] 0 containers: []
	W0916 14:31:53.268050  759664 logs.go:278] No container was found matching "kube-apiserver"
	I0916 14:31:53.268057  759664 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 14:31:53.268124  759664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 14:31:53.302563  759664 cri.go:89] found id: ""
	I0916 14:31:53.302601  759664 logs.go:276] 0 containers: []
	W0916 14:31:53.302611  759664 logs.go:278] No container was found matching "etcd"
	I0916 14:31:53.302618  759664 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 14:31:53.302686  759664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 14:31:53.339632  759664 cri.go:89] found id: ""
	I0916 14:31:53.339664  759664 logs.go:276] 0 containers: []
	W0916 14:31:53.339674  759664 logs.go:278] No container was found matching "coredns"
	I0916 14:31:53.339683  759664 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 14:31:53.339763  759664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 14:31:53.380235  759664 cri.go:89] found id: ""
	I0916 14:31:53.380262  759664 logs.go:276] 0 containers: []
	W0916 14:31:53.380269  759664 logs.go:278] No container was found matching "kube-scheduler"
	I0916 14:31:53.380282  759664 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 14:31:53.380331  759664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 14:31:53.418175  759664 cri.go:89] found id: ""
	I0916 14:31:53.418201  759664 logs.go:276] 0 containers: []
	W0916 14:31:53.418210  759664 logs.go:278] No container was found matching "kube-proxy"
	I0916 14:31:53.418217  759664 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 14:31:53.418333  759664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 14:31:53.458781  759664 cri.go:89] found id: ""
	I0916 14:31:53.458812  759664 logs.go:276] 0 containers: []
	W0916 14:31:53.458821  759664 logs.go:278] No container was found matching "kube-controller-manager"
	I0916 14:31:53.458827  759664 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 14:31:53.458891  759664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 14:31:53.501913  759664 cri.go:89] found id: ""
	I0916 14:31:53.501943  759664 logs.go:276] 0 containers: []
	W0916 14:31:53.501955  759664 logs.go:278] No container was found matching "kindnet"
	I0916 14:31:53.501968  759664 logs.go:123] Gathering logs for kubelet ...
	I0916 14:31:53.501985  759664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 14:31:53.552397  759664 logs.go:123] Gathering logs for dmesg ...
	I0916 14:31:53.552427  759664 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 14:31:53.566861  759664 logs.go:123] Gathering logs for describe nodes ...
	I0916 14:31:53.566911  759664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 14:31:53.696880  759664 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 14:31:53.696914  759664 logs.go:123] Gathering logs for CRI-O ...
	I0916 14:31:53.696935  759664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 14:31:53.833848  759664 logs.go:123] Gathering logs for container status ...
	I0916 14:31:53.833890  759664 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0916 14:31:53.874042  759664 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0916 14:31:53.874117  759664 out.go:270] * 
	* 
	W0916 14:31:53.874187  759664 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0916 14:31:53.874213  759664 out.go:270] * 
	* 
	W0916 14:31:53.875378  759664 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 14:31:53.878413  759664 out.go:201] 
	W0916 14:31:53.879493  759664 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0916 14:31:53.879530  759664 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0916 14:31:53.879555  759664 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0916 14:31:53.880981  759664 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
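The exit status 109 above corresponds to the K8S_KUBELET_NOT_RUNNING failure shown in the stderr block, and the output's own suggestion is to retry with the kubelet cgroup driver pinned to systemd. A minimal sketch of that retry, reusing the profile name and flags taken from this log (a hypothetical follow-up, not a command the test ran):

	# hedged sketch: retry the v1.20.0 start with the cgroup-driver override suggested in the output above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still fails, pull its journal from the VM for the linked GitHub issue
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-515632 sudo journalctl -xeu kubelet --no-pager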
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-515632
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-515632: (1.392014589s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-515632 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-515632 status --format={{.Host}}: exit status 7 (66.570315ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
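The status command exits non-zero (7) here because the profile was just stopped, and the test explicitly notes this "may be ok". A minimal shell sketch of the same check, assuming the profile name from this log (illustrative only):

	# hedged sketch: read the host state and tolerate the non-zero exit a stopped profile produces
	HOST_STATE=$(out/minikube-linux-amd64 -p kubernetes-upgrade-515632 status --format='{{.Host}}' || true)
	echo "host state: ${HOST_STATE}"   # prints "Stopped" at this point in the run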
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.485453037s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-515632 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (79.79239ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-515632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-515632
	    minikube start -p kubernetes-upgrade-515632 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5156322 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-515632 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
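The downgrade attempt is rejected with K8S_DOWNGRADE_UNSUPPORTED, which is the expected outcome for this step ("should fail" above). Outside the test, the first suggestion printed in that stderr is the safe path; a minimal sketch using the commands it lists, with the profile name from this log:

	# hedged sketch of suggestion 1 above: recreate the profile at the older Kubernetes version
	minikube delete -p kubernetes-upgrade-515632
	minikube start -p kubernetes-upgrade-515632 --kubernetes-version=v1.20.0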
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-515632 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.222607205s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-16 14:33:27.232915768 +0000 UTC m=+6071.589253224
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-515632 -n kubernetes-upgrade-515632
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-515632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-515632 logs -n 25: (1.911084457s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|----------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|----------|
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | iptables -t nat -L -n -v                             |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl status kubelet --all                       |                |         |         |                     |          |
	|         | --full --no-pager                                    |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl cat kubelet                                |                |         |         |                     |          |
	|         | --no-pager                                           |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |          |
	|         | --full --no-pager                                    |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo cat                           | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo cat                           | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl status docker --all                        |                |         |         |                     |          |
	|         | --full --no-pager                                    |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl cat docker                                 |                |         |         |                     |          |
	|         | --no-pager                                           |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo cat                           | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo docker                        | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | system info                                          |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl status cri-docker                          |                |         |         |                     |          |
	|         | --all --full --no-pager                              |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl cat cri-docker                             |                |         |         |                     |          |
	|         | --no-pager                                           |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo cat                           | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |          |
	| pause   | -p pause-563108                                      | pause-563108   | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | --alsologtostderr -v=5                               |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo cat                           | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | cri-dockerd --version                                |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl status containerd                          |                |         |         |                     |          |
	|         | --all --full --no-pager                              |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl cat containerd                             |                |         |         |                     |          |
	|         | --no-pager                                           |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo cat                           | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo cat                           | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | containerd config dump                               |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl status crio --all                          |                |         |         |                     |          |
	|         | --full --no-pager                                    |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo                               | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo find                          | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |          |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |          |
	| ssh     | -p kubenet-733905 sudo crio                          | kubenet-733905 | jenkins | v1.34.0 | 16 Sep 24 14:33 UTC |          |
	|         | config                                               |                |         |         |                     |          |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 14:32:52
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 14:32:52.432635  764780 out.go:345] Setting OutFile to fd 1 ...
	I0916 14:32:52.432718  764780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:32:52.432721  764780 out.go:358] Setting ErrFile to fd 2...
	I0916 14:32:52.432724  764780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:32:52.432889  764780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 14:32:52.433392  764780 out.go:352] Setting JSON to false
	I0916 14:32:52.434396  764780 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15321,"bootTime":1726481851,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 14:32:52.434477  764780 start.go:139] virtualization: kvm guest
	I0916 14:32:52.436646  764780 out.go:177] * [NoKubernetes-772968] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 14:32:52.437840  764780 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 14:32:52.437897  764780 notify.go:220] Checking for updates...
	I0916 14:32:52.439990  764780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 14:32:52.441091  764780 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 14:32:52.442174  764780 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 14:32:52.443193  764780 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 14:32:52.444154  764780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 14:32:52.445650  764780 config.go:182] Loaded profile config "NoKubernetes-772968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0916 14:32:52.446071  764780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:32:52.446118  764780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:32:52.460895  764780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34115
	I0916 14:32:52.461239  764780 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:32:52.461807  764780 main.go:141] libmachine: Using API Version  1
	I0916 14:32:52.461820  764780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:32:52.462203  764780 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:32:52.462393  764780 main.go:141] libmachine: (NoKubernetes-772968) Calling .DriverName
	I0916 14:32:52.462604  764780 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0916 14:32:52.462622  764780 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 14:32:52.462889  764780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:32:52.462918  764780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:32:52.477401  764780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0916 14:32:52.477778  764780 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:32:52.478156  764780 main.go:141] libmachine: Using API Version  1
	I0916 14:32:52.478164  764780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:32:52.478511  764780 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:32:52.478672  764780 main.go:141] libmachine: (NoKubernetes-772968) Calling .DriverName
	I0916 14:32:52.514190  764780 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 14:32:52.515444  764780 start.go:297] selected driver: kvm2
	I0916 14:32:52.515450  764780 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-772968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-772968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.157 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:32:52.515555  764780 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 14:32:52.515860  764780 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:32:52.515914  764780 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 14:32:52.530303  764780 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 14:32:52.531024  764780 cni.go:84] Creating CNI manager for ""
	I0916 14:32:52.531063  764780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:32:52.531117  764780 start.go:340] cluster config:
	{Name:NoKubernetes-772968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-772968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.157 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:32:52.531226  764780 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 14:32:52.532880  764780 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-772968
	I0916 14:32:54.374706  764454 start.go:364] duration metric: took 14.221399566s to acquireMachinesLock for "kubernetes-upgrade-515632"
	I0916 14:32:54.374790  764454 start.go:96] Skipping create...Using existing machine configuration
	I0916 14:32:54.374802  764454 fix.go:54] fixHost starting: 
	I0916 14:32:54.375241  764454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:32:54.375293  764454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:32:54.392197  764454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0916 14:32:54.392544  764454 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:32:54.393070  764454 main.go:141] libmachine: Using API Version  1
	I0916 14:32:54.393090  764454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:32:54.393444  764454 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:32:54.393682  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:32:54.393864  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetState
	I0916 14:32:54.395251  764454 fix.go:112] recreateIfNeeded on kubernetes-upgrade-515632: state=Running err=<nil>
	W0916 14:32:54.395277  764454 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 14:32:54.401326  764454 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-515632" VM ...
	I0916 14:32:54.402639  764454 machine.go:93] provisionDockerMachine start ...
	I0916 14:32:54.402678  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:32:54.402883  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:32:54.405491  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.405955  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:32:54.405983  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.406144  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:32:54.406350  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:54.406528  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:54.406663  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:32:54.406846  764454 main.go:141] libmachine: Using SSH client type: native
	I0916 14:32:54.407081  764454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:32:54.407096  764454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 14:32:54.522507  764454 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-515632
	
	I0916 14:32:54.522541  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetMachineName
	I0916 14:32:54.522802  764454 buildroot.go:166] provisioning hostname "kubernetes-upgrade-515632"
	I0916 14:32:54.522830  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetMachineName
	I0916 14:32:54.523018  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:32:54.525778  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.526170  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:32:54.526206  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.526294  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:32:54.526456  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:54.526605  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:54.526769  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:32:54.526955  764454 main.go:141] libmachine: Using SSH client type: native
	I0916 14:32:54.527190  764454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:32:54.527213  764454 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-515632 && echo "kubernetes-upgrade-515632" | sudo tee /etc/hostname
	I0916 14:32:54.669863  764454 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-515632
	
	I0916 14:32:54.669892  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:32:54.672437  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.672808  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:32:54.672849  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.673032  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:32:54.673206  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:54.673351  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:54.673453  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:32:54.673643  764454 main.go:141] libmachine: Using SSH client type: native
	I0916 14:32:54.673856  764454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:32:54.673874  764454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-515632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-515632/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-515632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 14:32:54.787785  764454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 14:32:54.787822  764454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19652-713072/.minikube CaCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19652-713072/.minikube}
	I0916 14:32:54.787851  764454 buildroot.go:174] setting up certificates
	I0916 14:32:54.787870  764454 provision.go:84] configureAuth start
	I0916 14:32:54.787886  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetMachineName
	I0916 14:32:54.788161  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetIP
	I0916 14:32:54.790842  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.791277  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:32:54.791317  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.791429  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:32:54.793614  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.794097  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:32:54.794129  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.794253  764454 provision.go:143] copyHostCerts
	I0916 14:32:54.794307  764454 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem, removing ...
	I0916 14:32:54.794317  764454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem
	I0916 14:32:54.794372  764454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/ca.pem (1082 bytes)
	I0916 14:32:54.794470  764454 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem, removing ...
	I0916 14:32:54.794478  764454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem
	I0916 14:32:54.794498  764454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/cert.pem (1123 bytes)
	I0916 14:32:54.794567  764454 exec_runner.go:144] found /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem, removing ...
	I0916 14:32:54.794575  764454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem
	I0916 14:32:54.794594  764454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19652-713072/.minikube/key.pem (1679 bytes)
	I0916 14:32:54.794657  764454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-515632 san=[127.0.0.1 192.168.39.161 kubernetes-upgrade-515632 localhost minikube]
	I0916 14:32:54.928148  764454 provision.go:177] copyRemoteCerts
	I0916 14:32:54.928206  764454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 14:32:54.928240  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:32:54.931274  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.931676  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:32:54.931706  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:54.931866  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:32:54.932050  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:54.932186  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:32:54.932340  764454 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:32:55.020092  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0916 14:32:55.044690  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 14:32:52.534289  764780 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0916 14:32:52.561319  764780 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0916 14:32:52.561436  764780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/NoKubernetes-772968/config.json ...
	I0916 14:32:52.561641  764780 start.go:360] acquireMachinesLock for NoKubernetes-772968: {Name:mke8f8f8ba61009cdea7a3d88b50b9f6ae6e1362 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 14:32:54.141319  764229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 14:32:54.141346  764229 machine.go:96] duration metric: took 6.108911562s to provisionDockerMachine
	I0916 14:32:54.141360  764229 start.go:293] postStartSetup for "pause-563108" (driver="kvm2")
	I0916 14:32:54.141400  764229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 14:32:54.141433  764229 main.go:141] libmachine: (pause-563108) Calling .DriverName
	I0916 14:32:54.141798  764229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 14:32:54.141829  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHHostname
	I0916 14:32:54.144607  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.145065  764229 main.go:141] libmachine: (pause-563108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:b9", ip: ""} in network mk-pause-563108: {Iface:virbr2 ExpiryTime:2024-09-16 15:31:45 +0000 UTC Type:0 Mac:52:54:00:48:18:b9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:pause-563108 Clientid:01:52:54:00:48:18:b9}
	I0916 14:32:54.145094  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined IP address 192.168.83.201 and MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.145212  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHPort
	I0916 14:32:54.145403  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHKeyPath
	I0916 14:32:54.145600  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHUsername
	I0916 14:32:54.145794  764229 sshutil.go:53] new ssh client: &{IP:192.168.83.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/pause-563108/id_rsa Username:docker}
	I0916 14:32:54.228540  764229 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 14:32:54.233220  764229 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 14:32:54.233251  764229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 14:32:54.233323  764229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 14:32:54.233420  764229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 14:32:54.233561  764229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 14:32:54.243549  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:32:54.268599  764229 start.go:296] duration metric: took 127.214978ms for postStartSetup
	I0916 14:32:54.268659  764229 fix.go:56] duration metric: took 6.262481488s for fixHost
	I0916 14:32:54.268688  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHHostname
	I0916 14:32:54.271461  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.271846  764229 main.go:141] libmachine: (pause-563108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:b9", ip: ""} in network mk-pause-563108: {Iface:virbr2 ExpiryTime:2024-09-16 15:31:45 +0000 UTC Type:0 Mac:52:54:00:48:18:b9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:pause-563108 Clientid:01:52:54:00:48:18:b9}
	I0916 14:32:54.271887  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined IP address 192.168.83.201 and MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.272048  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHPort
	I0916 14:32:54.272245  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHKeyPath
	I0916 14:32:54.272370  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHKeyPath
	I0916 14:32:54.272501  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHUsername
	I0916 14:32:54.272690  764229 main.go:141] libmachine: Using SSH client type: native
	I0916 14:32:54.272880  764229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.201 22 <nil> <nil>}
	I0916 14:32:54.272893  764229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 14:32:54.374553  764229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726497174.364636625
	
	I0916 14:32:54.374580  764229 fix.go:216] guest clock: 1726497174.364636625
	I0916 14:32:54.374588  764229 fix.go:229] Guest: 2024-09-16 14:32:54.364636625 +0000 UTC Remote: 2024-09-16 14:32:54.26866529 +0000 UTC m=+30.417896173 (delta=95.971335ms)
	I0916 14:32:54.374610  764229 fix.go:200] guest clock delta is within tolerance: 95.971335ms
	I0916 14:32:54.374615  764229 start.go:83] releasing machines lock for "pause-563108", held for 6.368488501s
	I0916 14:32:54.374638  764229 main.go:141] libmachine: (pause-563108) Calling .DriverName
	I0916 14:32:54.374916  764229 main.go:141] libmachine: (pause-563108) Calling .GetIP
	I0916 14:32:54.377861  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.378300  764229 main.go:141] libmachine: (pause-563108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:b9", ip: ""} in network mk-pause-563108: {Iface:virbr2 ExpiryTime:2024-09-16 15:31:45 +0000 UTC Type:0 Mac:52:54:00:48:18:b9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:pause-563108 Clientid:01:52:54:00:48:18:b9}
	I0916 14:32:54.378326  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined IP address 192.168.83.201 and MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.378636  764229 main.go:141] libmachine: (pause-563108) Calling .DriverName
	I0916 14:32:54.379147  764229 main.go:141] libmachine: (pause-563108) Calling .DriverName
	I0916 14:32:54.379329  764229 main.go:141] libmachine: (pause-563108) Calling .DriverName
	I0916 14:32:54.379450  764229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 14:32:54.379497  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHHostname
	I0916 14:32:54.379512  764229 ssh_runner.go:195] Run: cat /version.json
	I0916 14:32:54.379545  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHHostname
	I0916 14:32:54.382593  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.382620  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.383018  764229 main.go:141] libmachine: (pause-563108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:b9", ip: ""} in network mk-pause-563108: {Iface:virbr2 ExpiryTime:2024-09-16 15:31:45 +0000 UTC Type:0 Mac:52:54:00:48:18:b9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:pause-563108 Clientid:01:52:54:00:48:18:b9}
	I0916 14:32:54.383050  764229 main.go:141] libmachine: (pause-563108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:b9", ip: ""} in network mk-pause-563108: {Iface:virbr2 ExpiryTime:2024-09-16 15:31:45 +0000 UTC Type:0 Mac:52:54:00:48:18:b9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:pause-563108 Clientid:01:52:54:00:48:18:b9}
	I0916 14:32:54.383073  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined IP address 192.168.83.201 and MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.383151  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined IP address 192.168.83.201 and MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:32:54.383271  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHPort
	I0916 14:32:54.383370  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHPort
	I0916 14:32:54.383473  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHKeyPath
	I0916 14:32:54.383489  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHKeyPath
	I0916 14:32:54.383648  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHUsername
	I0916 14:32:54.383686  764229 main.go:141] libmachine: (pause-563108) Calling .GetSSHUsername
	I0916 14:32:54.383842  764229 sshutil.go:53] new ssh client: &{IP:192.168.83.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/pause-563108/id_rsa Username:docker}
	I0916 14:32:54.383852  764229 sshutil.go:53] new ssh client: &{IP:192.168.83.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/pause-563108/id_rsa Username:docker}
	I0916 14:32:54.466898  764229 ssh_runner.go:195] Run: systemctl --version
	I0916 14:32:54.487400  764229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 14:32:54.643898  764229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 14:32:54.650123  764229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 14:32:54.650192  764229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 14:32:54.662020  764229 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 14:32:54.662048  764229 start.go:495] detecting cgroup driver to use...
	I0916 14:32:54.662143  764229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 14:32:54.683161  764229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 14:32:54.698681  764229 docker.go:217] disabling cri-docker service (if available) ...
	I0916 14:32:54.698753  764229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 14:32:54.714543  764229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 14:32:54.729386  764229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 14:32:54.868793  764229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 14:32:54.999123  764229 docker.go:233] disabling docker service ...
	I0916 14:32:54.999194  764229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 14:32:55.016584  764229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 14:32:55.032316  764229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 14:32:55.194153  764229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 14:32:55.360524  764229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 14:32:55.377100  764229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 14:32:55.397085  764229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 14:32:55.397156  764229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:32:55.407565  764229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 14:32:55.407638  764229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:32:55.417553  764229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:32:55.427230  764229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:32:55.436986  764229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 14:32:55.447077  764229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:32:55.456771  764229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:32:55.467367  764229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:32:55.478559  764229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 14:32:55.488016  764229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 14:32:55.497142  764229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:32:55.626569  764229 ssh_runner.go:195] Run: sudo systemctl restart crio
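
The run of sed commands above pins the CRI-O pause image and switches the cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf in place before restarting crio. A rough local Go sketch of the same two substitutions; the helper and its behaviour are illustrative assumptions, not minikube's implementation:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies, in plain Go, the same substitutions the log shows
// being done with sed over SSH: pin the pause image and force the cgroupfs
// cgroup manager. Sketch only.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Example invocation mirroring the values in the log.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
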
	I0916 14:32:55.068557  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 14:32:55.093123  764454 provision.go:87] duration metric: took 305.237581ms to configureAuth
	I0916 14:32:55.093148  764454 buildroot.go:189] setting minikube options for container-runtime
	I0916 14:32:55.093346  764454 config.go:182] Loaded profile config "kubernetes-upgrade-515632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 14:32:55.093433  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:32:55.096466  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:55.096824  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:32:55.096863  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:32:55.097123  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:32:55.097312  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:55.097455  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:32:55.097598  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:32:55.097737  764454 main.go:141] libmachine: Using SSH client type: native
	I0916 14:32:55.097912  764454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:32:55.097928  764454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 14:33:01.254492  764780 start.go:364] duration metric: took 8.69281099s to acquireMachinesLock for "NoKubernetes-772968"
	I0916 14:33:01.254534  764780 start.go:96] Skipping create...Using existing machine configuration
	I0916 14:33:01.254539  764780 fix.go:54] fixHost starting: 
	I0916 14:33:01.255008  764780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:33:01.255055  764780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:33:01.271493  764780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0916 14:33:01.271845  764780 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:33:01.272337  764780 main.go:141] libmachine: Using API Version  1
	I0916 14:33:01.272356  764780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:33:01.272697  764780 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:33:01.272878  764780 main.go:141] libmachine: (NoKubernetes-772968) Calling .DriverName
	I0916 14:33:01.273012  764780 main.go:141] libmachine: (NoKubernetes-772968) Calling .GetState
	I0916 14:33:01.274401  764780 fix.go:112] recreateIfNeeded on NoKubernetes-772968: state=Stopped err=<nil>
	I0916 14:33:01.274420  764780 main.go:141] libmachine: (NoKubernetes-772968) Calling .DriverName
	W0916 14:33:01.274548  764780 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 14:33:01.276255  764780 out.go:177] * Restarting existing kvm2 VM for "NoKubernetes-772968" ...
	I0916 14:33:01.277686  764780 main.go:141] libmachine: (NoKubernetes-772968) Calling .Start
	I0916 14:33:01.277836  764780 main.go:141] libmachine: (NoKubernetes-772968) Ensuring networks are active...
	I0916 14:33:01.278559  764780 main.go:141] libmachine: (NoKubernetes-772968) Ensuring network default is active
	I0916 14:33:01.278912  764780 main.go:141] libmachine: (NoKubernetes-772968) Ensuring network mk-NoKubernetes-772968 is active
	I0916 14:33:01.279301  764780 main.go:141] libmachine: (NoKubernetes-772968) Getting domain xml...
	I0916 14:33:01.280007  764780 main.go:141] libmachine: (NoKubernetes-772968) Creating domain...
	I0916 14:33:02.421434  764229 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.794828203s)
	I0916 14:33:02.421467  764229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 14:33:02.421530  764229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 14:33:02.426916  764229 start.go:563] Will wait 60s for crictl version
	I0916 14:33:02.426973  764229 ssh_runner.go:195] Run: which crictl
	I0916 14:33:02.432096  764229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 14:33:02.485345  764229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
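
The four lines above are the raw crictl version block the start logic waits for before continuing. A small self-contained Go sketch, with a hypothetical parser name, that pulls the runtime fields out of output shaped like this:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion extracts key/value pairs from output shaped like the
// "Version / RuntimeName / RuntimeVersion / RuntimeApiVersion" block above.
// Hypothetical helper for illustration only.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields
}

func main() {
	sample := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	f := parseCrictlVersion(sample)
	fmt.Println(f["RuntimeName"], f["RuntimeVersion"]) // cri-o 1.29.1
}
```
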
	I0916 14:33:02.485437  764229 ssh_runner.go:195] Run: crio --version
	I0916 14:33:02.520331  764229 ssh_runner.go:195] Run: crio --version
	I0916 14:33:02.555025  764229 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 14:33:01.010928  764454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 14:33:01.010954  764454 machine.go:96] duration metric: took 6.608295744s to provisionDockerMachine
	I0916 14:33:01.010968  764454 start.go:293] postStartSetup for "kubernetes-upgrade-515632" (driver="kvm2")
	I0916 14:33:01.010993  764454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 14:33:01.011016  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:33:01.011366  764454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 14:33:01.011404  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:33:01.014133  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.014438  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:33:01.014469  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.014662  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:33:01.014834  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:33:01.014993  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:33:01.015126  764454 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:33:01.099833  764454 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 14:33:01.104321  764454 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 14:33:01.104346  764454 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/addons for local assets ...
	I0916 14:33:01.104409  764454 filesync.go:126] Scanning /home/jenkins/minikube-integration/19652-713072/.minikube/files for local assets ...
	I0916 14:33:01.104481  764454 filesync.go:149] local asset: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem -> 7205442.pem in /etc/ssl/certs
	I0916 14:33:01.104568  764454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 14:33:01.113701  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:33:01.137203  764454 start.go:296] duration metric: took 126.218865ms for postStartSetup
	I0916 14:33:01.137244  764454 fix.go:56] duration metric: took 6.762443195s for fixHost
	I0916 14:33:01.137266  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:33:01.139965  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.140315  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:33:01.140344  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.140466  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:33:01.140658  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:33:01.140825  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:33:01.140968  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:33:01.141122  764454 main.go:141] libmachine: Using SSH client type: native
	I0916 14:33:01.141310  764454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0916 14:33:01.141323  764454 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 14:33:01.254297  764454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726497181.242534371
	
	I0916 14:33:01.254324  764454 fix.go:216] guest clock: 1726497181.242534371
	I0916 14:33:01.254331  764454 fix.go:229] Guest: 2024-09-16 14:33:01.242534371 +0000 UTC Remote: 2024-09-16 14:33:01.137248304 +0000 UTC m=+21.123636018 (delta=105.286067ms)
	I0916 14:33:01.254351  764454 fix.go:200] guest clock delta is within tolerance: 105.286067ms
	I0916 14:33:01.254370  764454 start.go:83] releasing machines lock for "kubernetes-upgrade-515632", held for 6.879622312s
	I0916 14:33:01.254397  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:33:01.254661  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetIP
	I0916 14:33:01.257222  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.257554  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:33:01.257603  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.257744  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:33:01.258276  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:33:01.258459  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .DriverName
	I0916 14:33:01.258557  764454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 14:33:01.258618  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:33:01.258667  764454 ssh_runner.go:195] Run: cat /version.json
	I0916 14:33:01.258689  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHHostname
	I0916 14:33:01.261305  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.261530  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.261651  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:33:01.261693  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.261824  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:33:01.261934  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:33:01.261979  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:01.261981  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:33:01.262113  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:33:01.262135  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHPort
	I0916 14:33:01.262262  764454 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:33:01.262290  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHKeyPath
	I0916 14:33:01.262624  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetSSHUsername
	I0916 14:33:01.262763  764454 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/kubernetes-upgrade-515632/id_rsa Username:docker}
	I0916 14:33:01.364873  764454 ssh_runner.go:195] Run: systemctl --version
	I0916 14:33:01.370683  764454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 14:33:01.525830  764454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 14:33:01.533163  764454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 14:33:01.533235  764454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 14:33:01.545717  764454 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 14:33:01.545739  764454 start.go:495] detecting cgroup driver to use...
	I0916 14:33:01.545793  764454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 14:33:01.566267  764454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 14:33:01.580312  764454 docker.go:217] disabling cri-docker service (if available) ...
	I0916 14:33:01.580372  764454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 14:33:01.596805  764454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 14:33:01.614404  764454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 14:33:01.798484  764454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 14:33:01.996792  764454 docker.go:233] disabling docker service ...
	I0916 14:33:01.996881  764454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 14:33:02.017617  764454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 14:33:02.033401  764454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 14:33:02.193548  764454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 14:33:02.382836  764454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 14:33:02.399794  764454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 14:33:02.424408  764454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 14:33:02.424471  764454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:33:02.440557  764454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 14:33:02.440613  764454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:33:02.458587  764454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:33:02.473940  764454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:33:02.486318  764454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 14:33:02.502292  764454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:33:02.514448  764454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:33:02.526843  764454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 14:33:02.539627  764454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 14:33:02.555838  764454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 14:33:02.572749  764454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:33:02.750573  764454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 14:33:03.064080  764454 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 14:33:03.064161  764454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 14:33:03.069414  764454 start.go:563] Will wait 60s for crictl version
	I0916 14:33:03.069479  764454 ssh_runner.go:195] Run: which crictl
	I0916 14:33:03.073810  764454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 14:33:03.125090  764454 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 14:33:03.125181  764454 ssh_runner.go:195] Run: crio --version
	I0916 14:33:03.166619  764454 ssh_runner.go:195] Run: crio --version
	I0916 14:33:03.205350  764454 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 14:33:02.556610  764229 main.go:141] libmachine: (pause-563108) Calling .GetIP
	I0916 14:33:02.559452  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:33:02.559805  764229 main.go:141] libmachine: (pause-563108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:b9", ip: ""} in network mk-pause-563108: {Iface:virbr2 ExpiryTime:2024-09-16 15:31:45 +0000 UTC Type:0 Mac:52:54:00:48:18:b9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:pause-563108 Clientid:01:52:54:00:48:18:b9}
	I0916 14:33:02.559845  764229 main.go:141] libmachine: (pause-563108) DBG | domain pause-563108 has defined IP address 192.168.83.201 and MAC address 52:54:00:48:18:b9 in network mk-pause-563108
	I0916 14:33:02.560073  764229 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0916 14:33:02.564603  764229 kubeadm.go:883] updating cluster {Name:pause-563108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-563108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.201 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 14:33:02.564760  764229 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 14:33:02.564828  764229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:33:02.622995  764229 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:33:02.623027  764229 crio.go:433] Images already preloaded, skipping extraction
	I0916 14:33:02.623103  764229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:33:02.664195  764229 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:33:02.664228  764229 cache_images.go:84] Images are preloaded, skipping loading
	I0916 14:33:02.664240  764229 kubeadm.go:934] updating node { 192.168.83.201 8443 v1.31.1 crio true true} ...
	I0916 14:33:02.664388  764229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-563108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-563108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 14:33:02.664478  764229 ssh_runner.go:195] Run: crio config
	I0916 14:33:02.717337  764229 cni.go:84] Creating CNI manager for ""
	I0916 14:33:02.717368  764229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:33:02.717383  764229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 14:33:02.717417  764229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.201 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-563108 NodeName:pause-563108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 14:33:02.717577  764229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-563108"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
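
The multi-document YAML above is the kubeadm config minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) before copying it to /var/tmp/minikube/kubeadm.yaml.new. A short Go sketch of one plausible sanity check on such a rendered config, confirming the kubelet cgroup driver matches the cgroupfs manager configured for CRI-O earlier in the log; this check is an illustration, not part of minikube or kubeadm:

```go
package main

import (
	"fmt"
	"strings"
)

// hasCgroupfsKubelet scans a rendered multi-document kubeadm config and
// reports whether the KubeletConfiguration document pins cgroupDriver to
// cgroupfs. Illustrative only; kubeadm validates its config far more
// thoroughly than a substring check.
func hasCgroupfsKubelet(rendered string) bool {
	for _, doc := range strings.Split(rendered, "\n---\n") {
		if strings.Contains(doc, "kind: KubeletConfiguration") &&
			strings.Contains(doc, "cgroupDriver: cgroupfs") {
			return true
		}
	}
	return false
}

func main() {
	rendered := "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: cgroupfs\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println(hasCgroupfsKubelet(rendered)) // true
}
```
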
	I0916 14:33:02.717644  764229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 14:33:02.729595  764229 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 14:33:02.729688  764229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 14:33:02.741630  764229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 14:33:02.763051  764229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 14:33:02.783478  764229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 14:33:02.802890  764229 ssh_runner.go:195] Run: grep 192.168.83.201	control-plane.minikube.internal$ /etc/hosts
	I0916 14:33:02.807274  764229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:33:02.961452  764229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 14:33:02.979156  764229 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108 for IP: 192.168.83.201
	I0916 14:33:02.979180  764229 certs.go:194] generating shared ca certs ...
	I0916 14:33:02.979200  764229 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:33:02.979403  764229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 14:33:02.979465  764229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 14:33:02.979476  764229 certs.go:256] generating profile certs ...
	I0916 14:33:02.979614  764229 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/client.key
	I0916 14:33:02.979701  764229 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/apiserver.key.a898bc00
	I0916 14:33:02.979748  764229 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/proxy-client.key
	I0916 14:33:02.979902  764229 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 14:33:02.979944  764229 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 14:33:02.979959  764229 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 14:33:02.980000  764229 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 14:33:02.980033  764229 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 14:33:02.980517  764229 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 14:33:02.980618  764229 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:33:02.982416  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 14:33:03.015252  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 14:33:03.044725  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 14:33:03.073985  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 14:33:03.105810  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 14:33:03.139050  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 14:33:03.166175  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 14:33:03.198778  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 14:33:03.228372  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 14:33:03.260081  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 14:33:03.288055  764229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 14:33:03.315573  764229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 14:33:03.334403  764229 ssh_runner.go:195] Run: openssl version
	I0916 14:33:03.342163  764229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 14:33:03.356687  764229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:33:03.362820  764229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:33:03.362885  764229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:33:03.369185  764229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 14:33:03.381103  764229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 14:33:03.394815  764229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 14:33:03.400060  764229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:33:03.400120  764229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 14:33:03.407332  764229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 14:33:03.417933  764229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 14:33:03.430257  764229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 14:33:03.436414  764229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:33:03.436487  764229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 14:33:03.444582  764229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 14:33:03.455677  764229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:33:03.462414  764229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 14:33:03.470888  764229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 14:33:03.478004  764229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 14:33:03.485525  764229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 14:33:03.492182  764229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 14:33:03.498539  764229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
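
Each openssl x509 -noout -checkend 86400 invocation above asks whether a certificate remains valid for at least another 24 hours. An equivalent check in standard-library Go, with a hypothetical helper name and one of the certificate paths from the log as the example input:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validForAnotherDay mirrors `openssl x509 -noout -checkend 86400`: it parses
// a PEM-encoded certificate and reports whether it is still valid 24 hours
// from now. Hypothetical helper written for illustration.
func validForAnotherDay(pemPath string) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.After(time.Now().Add(24 * time.Hour)), nil
}

func main() {
	// Path taken from the log; adjust for your environment.
	ok, err := validForAnotherDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(ok, err)
}
```
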
	I0916 14:33:03.505454  764229 kubeadm.go:392] StartCluster: {Name:pause-563108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-563108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.201 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:33:03.505589  764229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 14:33:03.505641  764229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:33:03.554268  764229 cri.go:89] found id: "4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13"
	I0916 14:33:03.554299  764229 cri.go:89] found id: "cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de"
	I0916 14:33:03.554306  764229 cri.go:89] found id: "ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8"
	I0916 14:33:03.554311  764229 cri.go:89] found id: "da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6"
	I0916 14:33:03.554315  764229 cri.go:89] found id: "8c43bb36bae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90"
	I0916 14:33:03.554321  764229 cri.go:89] found id: "3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c"
	I0916 14:33:03.554325  764229 cri.go:89] found id: ""
	I0916 14:33:03.554381  764229 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 14:33:03.583875  764229 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d/userdata","rootfs":"/var/lib/containers/storage/overlay/0784434a67f65d06c7d1eb668b8336523f1e5aad7730d3185c10a57f36fe39d2/merged","created":"2024-09-16T14:32:06.505407865Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T14:32:05.925368938Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"c9bc0998c6f0845cd046cf5d40389540\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podc9bc0998c6f0845cd046cf5d40389540","io.kubernetes.cri-o.ContainerID":"086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d","io.kubernetes.cri-o.Con
tainerName":"k8s_POD_kube-controller-manager-pause-563108_kube-system_c9bc0998c6f0845cd046cf5d40389540_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.403587848Z","io.kubernetes.cri-o.HostName":"pause-563108","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-563108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"c9bc0998c6f0845cd046cf5d40389540\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-563108\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/
kube-system_kube-controller-manager-pause-563108_c9bc0998c6f0845cd046cf5d40389540/086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-563108\",\"uid\":\"c9bc0998c6f0845cd046cf5d40389540\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0784434a67f65d06c7d1eb668b8336523f1e5aad7730d3185c10a57f36fe39d2/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-563108_kube-system_c9bc0998c6f0845cd046cf5d40389540_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/0
86835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-563108_kube-system_c9bc0998c6f0845cd046cf5d40389540_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c9bc0998c6f0845cd046cf5d40389540","kubernetes.io/config.hash":"c9bc0998c6f0845cd046cf5d40389540","kubernetes.io/config.seen":"2024-09-16T14:32:05.925368938Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c
","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c/userdata","rootfs":"/var/lib/containers/storage/overlay/fc02da8a17d8ba2ef11f417b772bb56487007519094e99a7902fe32d5c609d45/merged","created":"2024-09-16T14:32:06.662081628Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3089feae8ecf2b925e5827a983491ce5e2f1e229
bb002fd949fe3d41a368196c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.583714891Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-563108\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1906d0a688f8b3198653a04f351077f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-563108_1906d0a688f8b3198653a04f351077f3/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fc02da8a17d8ba2ef11f417b772bb56487007519094e99a7902fe32d5c609d45/merged","io.kubernetes.cri-o
.Name":"k8s_kube-apiserver_kube-apiserver-pause-563108_kube-system_1906d0a688f8b3198653a04f351077f3_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-563108_kube-system_1906d0a688f8b3198653a04f351077f3_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1906d0a688f8b3198653a04f351077f3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1906d0a688f8b3198653a04f35107
7f3/containers/kube-apiserver/bc78175d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1906d0a688f8b3198653a04f351077f3","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.83.201:8443","kubernetes.io/config.hash":"1906d0a688f8b3198653a04f351077f3","kubernetes.io/config.seen":"2024-09-16T14:32:05.925367381Z","kubernetes.io/config.source":"file"},"owner":"root"
},{"ociVersion":"1.0.2-dev","id":"37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e/userdata","rootfs":"/var/lib/containers/storage/overlay/c99bc415804e9b7c6e96325218969b2eb2644252a22254cff5aa79500d4ad010/merged","created":"2024-09-16T14:32:06.482591147Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T14:32:05.925367381Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"1906d0a688f8b3198653a04f351077f3\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.83.201:8443\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod1906d0a688f8b3198653a04f351077f3","io.kubernetes.cri-o.ContainerID":"37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e
","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-563108_kube-system_1906d0a688f8b3198653a04f351077f3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.388677215Z","io.kubernetes.cri-o.HostName":"pause-563108","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-563108","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"1906d0a688f8b3198653a04f351077f3\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-563108\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-syste
m_kube-apiserver-pause-563108_1906d0a688f8b3198653a04f351077f3/37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-563108\",\"uid\":\"1906d0a688f8b3198653a04f351077f3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c99bc415804e9b7c6e96325218969b2eb2644252a22254cff5aa79500d4ad010/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-563108_kube-system_1906d0a688f8b3198653a04f351077f3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/37fb8197d1ad80a01c7f926271bad00904e8b0
68430647cf339070ce16c7c56e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-563108_kube-system_1906d0a688f8b3198653a04f351077f3_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"1906d0a688f8b3198653a04f351077f3","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.83.201:8443","kubernetes.io/config.hash":"1906d0a688f8b3198653a04f351077f3","kubernetes.io/config.seen":"2024-09-16T14:32:05.925367381Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d20a78c07ee7880fb1a48f3de83e90
6af3b08861155f4a0fffee82451989a13","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13/userdata","rootfs":"/var/lib/containers/storage/overlay/c4e8ac4e3e41b7ae9a7789ca9f7926942d0014df191d34ccc9de6c8b801c95b0/merged","created":"2024-09-16T14:32:19.268243465Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\
\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T14:32:19.229544056Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\
"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-m77lx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e0af3abe-e7b3-4576-9a3d-2299568d8cab\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-m77lx_e0af3abe-e7b3-4576-9a3d-2299568d8cab/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c4e8ac4e3e41b7ae9a7789ca9f7926942d0014df191d34ccc9de6c8b801c95b0/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-m77lx_kube-system_e0af3abe-e7b3-4576-9a3d-2299568d8cab_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0","io.kubernetes.cri-o.SandboxName":"k8s_coredn
s-7c65d6cfc9-m77lx_kube-system_e0af3abe-e7b3-4576-9a3d-2299568d8cab_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/e0af3abe-e7b3-4576-9a3d-2299568d8cab/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e0af3abe-e7b3-4576-9a3d-2299568d8cab/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e0af3abe-e7b3-4576-9a3d-2299568d8cab/containers/coredns/9032b5fd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e0af3abe-e7b3-4576-9a3d-22
99568d8cab/volumes/kubernetes.io~projected/kube-api-access-fkbsh\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-m77lx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e0af3abe-e7b3-4576-9a3d-2299568d8cab","kubernetes.io/config.seen":"2024-09-16T14:32:17.210438203Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df/userdata","rootfs":"/var/lib/containers/storage/overlay/94af37e69041890a0af3a41e1579983f8d440c730a61f41fd033b4f12474b132/merged","created":"2024-09-16T14:32:17.463462001Z","annotations":{"controller-revision-hash":"648b489c5b","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations"
:"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2024-09-16T14:32:17.093767795Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/besteffort/pod0f3aad94-7e60-4355-a810-a5b7df71c853","io.kubernetes.cri-o.ContainerID":"64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-5ct5q_kube-system_0f3aad94-7e60-4355-a810-a5b7df71c853_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T14:32:17.403722964Z","io.kubernetes.cri-o.HostName":"pause-563108","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-proxy-5ct5q","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"k8s
-app\":\"kube-proxy\",\"controller-revision-hash\":\"648b489c5b\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"0f3aad94-7e60-4355-a810-a5b7df71c853\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-5ct5q\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-5ct5q_0f3aad94-7e60-4355-a810-a5b7df71c853/64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-5ct5q\",\"uid\":\"0f3aad94-7e60-4355-a810-a5b7df71c853\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/94af37e69041890a0af3a41e1579983f8d440c730a61f41fd033b4f12474b132/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-5ct5q_kube-system_0f3aad94-7e60-4355-a810-a5b7df71c853_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLi
nuxResources":"{\"cpu_period\":100000,\"cpu_shares\":2,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-5ct5q_kube-system_0f3aad94-7e60-4355-a810-a5b7df71c853_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df/userdata/shm","io.kubernetes.pod.name":"kube-proxy-5ct5q","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0f3aad94-7e60-4355-a810-a5b7df71c853","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2024-09-16
T14:32:17.093767795Z","kubernetes.io/config.source":"api","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f/userdata","rootfs":"/var/lib/containers/storage/overlay/6aa19af929f5eb01315f29d9f5fa6efa848556040ee61c73d8c42a40798bce03/merged","created":"2024-09-16T14:32:06.523408337Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T14:32:05.925363885Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e7053505b0e47e4685489c8201324958\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.83.201:2379\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode7053505b0e47e4685489c8201324958","io.kubernete
s.cri-o.ContainerID":"89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-563108_kube-system_e7053505b0e47e4685489c8201324958_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.409249575Z","io.kubernetes.cri-o.HostName":"pause-563108","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-pause-563108","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"e7053505b0e47e4685489c8201324958\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-563108\"}","io.kuber
netes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-563108_e7053505b0e47e4685489c8201324958/89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-563108\",\"uid\":\"e7053505b0e47e4685489c8201324958\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6aa19af929f5eb01315f29d9f5fa6efa848556040ee61c73d8c42a40798bce03/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-563108_kube-system_e7053505b0e47e4685489c8201324958_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/89a416009cfb499db33ec1
14452a99a816f5c71588d1a6634392a269ff3d955f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-563108_kube-system_e7053505b0e47e4685489c8201324958_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e7053505b0e47e4685489c8201324958","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.83.201:2379","kubernetes.io/config.hash":"e7053505b0e47e4685489c8201324958","kubernetes.io/config.seen":"2024-09-16T14:32:05.925363885Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8c43bb36bae3bfd7341dcc7b27fed09dec946c549d
0acda06fdc16825aaece90","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8c43bb36bae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90/userdata","rootfs":"/var/lib/containers/storage/overlay/ebc37c41d57e7e6256a78f0e2a55a881b3c668fa4d9a1c0a3ba6c3309dfea87d/merged","created":"2024-09-16T14:32:06.802943576Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8c43bb36b
ae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.627753242Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-563108\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c9bc0998c6f0845cd046cf5d40389540\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-563108_c9bc0998c6f0845cd046cf5d40389540/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ebc37c41
d57e7e6256a78f0e2a55a881b3c668fa4d9a1c0a3ba6c3309dfea87d/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-563108_kube-system_c9bc0998c6f0845cd046cf5d40389540_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-563108_kube-system_c9bc0998c6f0845cd046cf5d40389540_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c9bc0998c6f0845cd046cf5d40389540/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},
{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c9bc0998c6f0845cd046cf5d40389540/containers/kube-controller-manager/74c81e6f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\
"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c9bc0998c6f0845cd046cf5d40389540","kubernetes.io/config.hash":"c9bc0998c6f0845cd046cf5d40389540","kubernetes.io/config.seen":"2024-09-16T14:32:05.925368938Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de/userdata","rootfs":"/var/lib/containers/storage/overlay/43360198521a7b7bd651c124136782c9dbcd838b26f3ef68149b49a1a59cc32b/merged","created":"2024-09-16T14:32:17.586727757Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount"
:"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T14:32:17.533296812Z","io.kubernetes.cri-o.Image":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes
.pod.name\":\"kube-proxy-5ct5q\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0f3aad94-7e60-4355-a810-a5b7df71c853\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-5ct5q_0f3aad94-7e60-4355-a810-a5b7df71c853/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/43360198521a7b7bd651c124136782c9dbcd838b26f3ef68149b49a1a59cc32b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-5ct5q_kube-system_0f3aad94-7e60-4355-a810-a5b7df71c853_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-5ct5q_kube-system_0f3aad94-7e60-4355-a810-a5b7df71c853_0","io.kuberne
tes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0f3aad94-7e60-4355-a810-a5b7df71c853/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0f3aad94-7e60-4355-a810-a5b7df71c853/containers/kube-proxy/c2160636\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/0f3aad94-7e60-4355-a810-a5b7df71c853/volumes/kubernetes.io~configmap/kube-proxy\"
,\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/0f3aad94-7e60-4355-a810-a5b7df71c853/volumes/kubernetes.io~projected/kube-api-access-6nhxm\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-5ct5q","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0f3aad94-7e60-4355-a810-a5b7df71c853","kubernetes.io/config.seen":"2024-09-16T14:32:17.093767795Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177/userdata","rootfs":"/var/lib/containers/storage/overlay/ed23902dd43c563cd262f60f668de4920fd35be9752ca17869a54a33112cc5c5/merged","created"
:"2024-09-16T14:32:06.528355505Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"ea2db81505f8d17f294f83af3e972c85\",\"kubernetes.io/config.seen\":\"2024-09-16T14:32:05.925370083Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podea2db81505f8d17f294f83af3e972c85","io.kubernetes.cri-o.ContainerID":"da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-563108_kube-system_ea2db81505f8d17f294f83af3e972c85_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.397627365Z","io.kubernetes.cri-o.HostName":"pause-563108","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177/userdata
/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-563108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-563108\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"ea2db81505f8d17f294f83af3e972c85\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-563108_ea2db81505f8d17f294f83af3e972c85/da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-563108\",\"uid\":\"ea2db81505f8d17f294f83af3e972c85\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ed23902dd43c563cd262f60f668de4920fd35be9752ca17869a54a33112cc5c5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-p
ause-563108_kube-system_ea2db81505f8d17f294f83af3e972c85_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-563108_kube-system_ea2db81505f8d17f294f83af3e972c85_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/da157d37ff47e4eaa19eb376c19b1b7edfc937dc
b27367c8c1e23017cfb72177/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ea2db81505f8d17f294f83af3e972c85","kubernetes.io/config.hash":"ea2db81505f8d17f294f83af3e972c85","kubernetes.io/config.seen":"2024-09-16T14:32:05.925370083Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6/userdata","rootfs":"/var/lib/containers/storage/overlay/1639a0dadc155962d880fe53f015078b130b600f3cd8c0e2a0f526a4fcbaedec/merged","created":"2024-09-16T14:32:06.833222241Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.termina
tionMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.674374039Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-
pause-563108\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ea2db81505f8d17f294f83af3e972c85\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-563108_ea2db81505f8d17f294f83af3e972c85/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1639a0dadc155962d880fe53f015078b130b600f3cd8c0e2a0f526a4fcbaedec/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-563108_kube-system_ea2db81505f8d17f294f83af3e972c85_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-563108_kube-system_ea2db81505f8d17f294f83af3e972c85_0",
"io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ea2db81505f8d17f294f83af3e972c85/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ea2db81505f8d17f294f83af3e972c85/containers/kube-scheduler/4b87b3cf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ea2db81505f8d17f294f83af3e972c85","kubernetes.io/config.hash":"ea2db81505f8d17f294f8
3af3e972c85","kubernetes.io/config.seen":"2024-09-16T14:32:05.925370083Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0/userdata","rootfs":"/var/lib/containers/storage/overlay/1431142cc2f811dac42d5397f9939b2ae8d86cb8ec0f877454660656e5e82fe9/merged","created":"2024-09-16T14:32:19.168366247Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T14:32:17.210438203Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"ee:1f:7d:44:f9:23\"},{\"name\":\"veth1f650448\",\"mac\":\"e6:a9:5a:9b:47:7c\"},{\"name\":\"eth0\",\"mac\":\"4e:39:3b:fe:ae:4f\",\"sandbox\":\"/var/ru
n/netns/6efb71a3-ef2b-4ce5-8898-e588db7359f3\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode0af3abe-e7b3-4576-9a3d-2299568d8cab","io.kubernetes.cri-o.ContainerID":"e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-7c65d6cfc9-m77lx_kube-system_e0af3abe-e7b3-4576-9a3d-2299568d8cab_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T14:32:19.016540172Z","io.kubernetes.cri-o.HostName":"coredns-7c65d6cfc9-m77lx","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.
kubernetes.cri-o.KubeName":"coredns-7c65d6cfc9-m77lx","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"e0af3abe-e7b3-4576-9a3d-2299568d8cab\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-m77lx\",\"pod-template-hash\":\"7c65d6cfc9\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-m77lx_e0af3abe-e7b3-4576-9a3d-2299568d8cab/e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-7c65d6cfc9-m77lx\",\"uid\":\"e0af3abe-e7b3-4576-9a3d-2299568d8cab\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1431142cc2f811dac42d5397f9939b2ae8d86cb8ec0f877454660656e5e82fe9/merged","io.kubernetes.cri-o.Name":"k8s_coredns-7c65d6cfc9-m77lx_kube-system_e0af3abe-e7b3-4576-9a3d-2299568d8cab_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.Nam
espaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"memory_limit_in_bytes\":178257920,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-m77lx_kube-system_e0af3abe-e7b3-4576-9a3d-2299568d8cab_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0/userdata/shm","io.kubernetes.pod.name":"coredns-7c65d6cfc9-m77lx","io.kuberne
tes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e0af3abe-e7b3-4576-9a3d-2299568d8cab","k8s-app":"kube-dns","kubernetes.io/config.seen":"2024-09-16T14:32:17.210438203Z","kubernetes.io/config.source":"api","pod-template-hash":"7c65d6cfc9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8/userdata","rootfs":"/var/lib/containers/storage/overlay/9941b01080a5a394e42a638ed71f5a0a4275029f431da84db5f10228daf3f10a/merged","created":"2024-09-16T14:32:06.889744334Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kuber
netes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T14:32:06.718959417Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-563108\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e7053505b0e47e4685489c8201324958\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/
kube-system_etcd-pause-563108_e7053505b0e47e4685489c8201324958/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9941b01080a5a394e42a638ed71f5a0a4275029f431da84db5f10228daf3f10a/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-563108_kube-system_e7053505b0e47e4685489c8201324958_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-563108_kube-system_e7053505b0e47e4685489c8201324958_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\
",\"host_path\":\"/var/lib/kubelet/pods/e7053505b0e47e4685489c8201324958/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e7053505b0e47e4685489c8201324958/containers/etcd/c7a10e6c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-563108","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e7053505b0e47e4685489c8201324958","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.83.201:2379","kubernetes.io/config.hash":"e7053505b0e47e4685489c8201324958","kubernete
s.io/config.seen":"2024-09-16T14:32:05.925363885Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0916 14:33:03.584397  764229 cri.go:126] list returned 12 containers
	I0916 14:33:03.584410  764229 cri.go:129] container: {ID:086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d Status:stopped}
	I0916 14:33:03.584425  764229 cri.go:131] skipping 086835dbe4ffa7acd3e324d53a76b813810ecacace85c45c879283d68a49e53d - not in ps
	I0916 14:33:03.584430  764229 cri.go:129] container: {ID:3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c Status:stopped}
	I0916 14:33:03.584435  764229 cri.go:135] skipping {3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c stopped}: state = "stopped", want "paused"
	I0916 14:33:03.584449  764229 cri.go:129] container: {ID:37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e Status:stopped}
	I0916 14:33:03.584456  764229 cri.go:131] skipping 37fb8197d1ad80a01c7f926271bad00904e8b068430647cf339070ce16c7c56e - not in ps
	I0916 14:33:03.584460  764229 cri.go:129] container: {ID:4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13 Status:stopped}
	I0916 14:33:03.584467  764229 cri.go:135] skipping {4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13 stopped}: state = "stopped", want "paused"
	I0916 14:33:03.584472  764229 cri.go:129] container: {ID:64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df Status:stopped}
	I0916 14:33:03.584479  764229 cri.go:131] skipping 64bdab205af8a70147b07d876302fb1a864538ea4f2c0179a05278f227f3e2df - not in ps
	I0916 14:33:03.584484  764229 cri.go:129] container: {ID:89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f Status:stopped}
	I0916 14:33:03.584490  764229 cri.go:131] skipping 89a416009cfb499db33ec114452a99a816f5c71588d1a6634392a269ff3d955f - not in ps
	I0916 14:33:03.584495  764229 cri.go:129] container: {ID:8c43bb36bae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90 Status:stopped}
	I0916 14:33:03.584502  764229 cri.go:135] skipping {8c43bb36bae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90 stopped}: state = "stopped", want "paused"
	I0916 14:33:03.584521  764229 cri.go:129] container: {ID:cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de Status:stopped}
	I0916 14:33:03.584533  764229 cri.go:135] skipping {cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de stopped}: state = "stopped", want "paused"
	I0916 14:33:03.584542  764229 cri.go:129] container: {ID:da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177 Status:stopped}
	I0916 14:33:03.584550  764229 cri.go:131] skipping da157d37ff47e4eaa19eb376c19b1b7edfc937dcb27367c8c1e23017cfb72177 - not in ps
	I0916 14:33:03.584555  764229 cri.go:129] container: {ID:da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6 Status:stopped}
	I0916 14:33:03.584563  764229 cri.go:135] skipping {da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6 stopped}: state = "stopped", want "paused"
	I0916 14:33:03.584571  764229 cri.go:129] container: {ID:e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0 Status:stopped}
	I0916 14:33:03.584580  764229 cri.go:131] skipping e1819e1408140023cb29c60674254071fa4f5e72cfd91b1a71074ff0976656c0 - not in ps
	I0916 14:33:03.584589  764229 cri.go:129] container: {ID:ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8 Status:stopped}
	I0916 14:33:03.584605  764229 cri.go:135] skipping {ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8 stopped}: state = "stopped", want "paused"
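	(The twelve list entries above are all rejected for one of two reasons: the container ID is not in the `crictl ps` listing, or its state is "stopped" while the wanted state is "paused". A minimal, self-contained Go sketch of that filtering logic; the container type and filterPausable helper are hypothetical names for illustration, not minikube's actual cri.go code:)

	package main

	import "fmt"

	type container struct {
		ID     string
		Status string
	}

	// filterPausable keeps only containers that appear in the ps listing and
	// are already in the wanted state; everything else is logged and skipped,
	// mirroring the cri.go:129/131/135 lines above.
	func filterPausable(all []container, inPS map[string]bool, want string) []container {
		var kept []container
		for _, c := range all {
			if !inPS[c.ID] {
				fmt.Printf("skipping %s - not in ps\n", c.ID)
				continue
			}
			if c.Status != want {
				fmt.Printf("skipping %v: state = %q, want %q\n", c, c.Status, want)
				continue
			}
			kept = append(kept, c)
		}
		return kept
	}

	func main() {
		all := []container{{ID: "086835db", Status: "stopped"}, {ID: "3089feae", Status: "stopped"}}
		inPS := map[string]bool{"3089feae": true}
		fmt.Println(filterPausable(all, inPS, "paused"))
	}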
	I0916 14:33:03.584663  764229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 14:33:03.596855  764229 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 14:33:03.596876  764229 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 14:33:03.596943  764229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 14:33:03.609167  764229 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 14:33:03.610352  764229 kubeconfig.go:125] found "pause-563108" server: "https://192.168.83.201:8443"
	I0916 14:33:03.612093  764229 kapi.go:59] client config for pause-563108: &rest.Config{Host:"https://192.168.83.201:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/client.crt", KeyFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/profiles/pause-563108/client.key", CAFile:"/home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[
]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 14:33:03.612946  764229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 14:33:03.626618  764229 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.201
	I0916 14:33:03.626666  764229 kubeadm.go:1160] stopping kube-system containers ...
	I0916 14:33:03.626684  764229 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0916 14:33:03.626768  764229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:33:03.673294  764229 cri.go:89] found id: "4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13"
	I0916 14:33:03.673313  764229 cri.go:89] found id: "cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de"
	I0916 14:33:03.673317  764229 cri.go:89] found id: "ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8"
	I0916 14:33:03.673320  764229 cri.go:89] found id: "da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6"
	I0916 14:33:03.673322  764229 cri.go:89] found id: "8c43bb36bae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90"
	I0916 14:33:03.673325  764229 cri.go:89] found id: "3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c"
	I0916 14:33:03.673327  764229 cri.go:89] found id: ""
	I0916 14:33:03.673333  764229 cri.go:252] Stopping containers: [4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13 cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8 da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6 8c43bb36bae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90 3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c]
	I0916 14:33:03.673382  764229 ssh_runner.go:195] Run: which crictl
	I0916 14:33:03.677763  764229 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 4d20a78c07ee7880fb1a48f3de83e906af3b08861155f4a0fffee82451989a13 cd31b4748a1d79114320dad810ea59142fff9a90f0596b350f6171e329c6f6de ebb62b7f5e7b4c1ee98cd91d85f0a9e9ae539930d9feb388df8678c79de3b0e8 da3149c366d5973346a298fd370e412d721744b0aa9091d575a09bc8a9badcc6 8c43bb36bae3bfd7341dcc7b27fed09dec946c549d0acda06fdc16825aaece90 3089feae8ecf2b925e5827a983491ce5e2f1e229bb002fd949fe3d41a368196c
	I0916 14:33:03.758975  764229 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 14:33:03.810682  764229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 14:33:03.826197  764229 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 16 14:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Sep 16 14:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 16 14:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Sep 16 14:32 /etc/kubernetes/scheduler.conf
	
	I0916 14:33:03.826276  764229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 14:33:03.842022  764229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 14:33:03.853265  764229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 14:33:03.863508  764229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 14:33:03.863573  764229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 14:33:03.878842  764229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
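	(The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane URL and removes any file that lacks it so it can be regenerated. A hedged Go sketch of that per-file check; ensureEndpoint and the paths in main are illustrative placeholders, not minikube's implementation:)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureEndpoint removes a kubeconfig that does not reference the expected
	// control-plane endpoint, so a later kubeadm phase can rewrite it.
	func ensureEndpoint(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // file already points at the expected endpoint
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
		return os.Remove(path)
	}

	func main() {
		files := []string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf"}
		for _, f := range files {
			if err := ensureEndpoint(f, "https://control-plane.minikube.internal:8443"); err != nil {
				fmt.Println(err)
			}
		}
	}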
	I0916 14:33:03.206400  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) Calling .GetIP
	I0916 14:33:03.209652  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:03.210122  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:1d:fe", ip: ""} in network mk-kubernetes-upgrade-515632: {Iface:virbr1 ExpiryTime:2024-09-16 15:32:15 +0000 UTC Type:0 Mac:52:54:00:18:1d:fe Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:kubernetes-upgrade-515632 Clientid:01:52:54:00:18:1d:fe}
	I0916 14:33:03.210158  764454 main.go:141] libmachine: (kubernetes-upgrade-515632) DBG | domain kubernetes-upgrade-515632 has defined IP address 192.168.39.161 and MAC address 52:54:00:18:1d:fe in network mk-kubernetes-upgrade-515632
	I0916 14:33:03.210410  764454 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 14:33:03.215380  764454 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-515632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 14:33:03.215522  764454 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 14:33:03.215584  764454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:33:03.265523  764454 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:33:03.265548  764454 crio.go:433] Images already preloaded, skipping extraction
	I0916 14:33:03.265615  764454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 14:33:03.311596  764454 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 14:33:03.311618  764454 cache_images.go:84] Images are preloaded, skipping loading
	I0916 14:33:03.311626  764454 kubeadm.go:934] updating node { 192.168.39.161 8443 v1.31.1 crio true true} ...
	I0916 14:33:03.311769  764454 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-515632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 14:33:03.311862  764454 ssh_runner.go:195] Run: crio config
	I0916 14:33:03.380401  764454 cni.go:84] Creating CNI manager for ""
	I0916 14:33:03.380425  764454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 14:33:03.380434  764454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 14:33:03.380458  764454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-515632 NodeName:kubernetes-upgrade-515632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 14:33:03.380686  764454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-515632"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 14:33:03.380761  764454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 14:33:03.392467  764454 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 14:33:03.392541  764454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 14:33:03.406217  764454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0916 14:33:03.428290  764454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 14:33:03.449266  764454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0916 14:33:03.471168  764454 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I0916 14:33:03.475910  764454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 14:33:03.646982  764454 ssh_runner.go:195] Run: sudo systemctl start kubelet
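After writing the unit files, the runner reloads systemd and starts kubelet. A minimal local sketch of those two calls with os/exec; the real commands run over SSH inside the VM via minikube's ssh_runner:

// Sketch only: mirror the two systemctl calls from the log on a local host.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := run("sudo", "systemctl", "daemon-reload"); err != nil {
		fmt.Println("daemon-reload failed:", err)
		return
	}
	if err := run("sudo", "systemctl", "start", "kubelet"); err != nil {
		fmt.Println("start kubelet failed:", err)
	}
}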
	I0916 14:33:03.666427  764454 certs.go:68] Setting up /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632 for IP: 192.168.39.161
	I0916 14:33:03.666449  764454 certs.go:194] generating shared ca certs ...
	I0916 14:33:03.666470  764454 certs.go:226] acquiring lock for ca certs: {Name:mk25b35916ff3ff3777938e3e2b7794965f8a707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 14:33:03.666732  764454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key
	I0916 14:33:03.666803  764454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key
	I0916 14:33:03.666818  764454 certs.go:256] generating profile certs ...
	I0916 14:33:03.666929  764454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/client.key
	I0916 14:33:03.666990  764454 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key.0d786eb0
	I0916 14:33:03.667043  764454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.key
	I0916 14:33:03.667194  764454 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem (1338 bytes)
	W0916 14:33:03.667239  764454 certs.go:480] ignoring /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544_empty.pem, impossibly tiny 0 bytes
	I0916 14:33:03.667252  764454 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 14:33:03.667294  764454 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/ca.pem (1082 bytes)
	I0916 14:33:03.667325  764454 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/cert.pem (1123 bytes)
	I0916 14:33:03.667378  764454 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/certs/key.pem (1679 bytes)
	I0916 14:33:03.667438  764454 certs.go:484] found cert: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem (1708 bytes)
	I0916 14:33:03.668286  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 14:33:03.698364  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 14:33:03.726027  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 14:33:03.754521  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 14:33:03.785023  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 14:33:03.812375  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 14:33:03.921341  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 14:33:04.024555  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/kubernetes-upgrade-515632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 14:33:04.200901  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/certs/720544.pem --> /usr/share/ca-certificates/720544.pem (1338 bytes)
	I0916 14:33:04.376635  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/ssl/certs/7205442.pem --> /usr/share/ca-certificates/7205442.pem (1708 bytes)
	I0916 14:33:04.519963  764454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19652-713072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 14:33:04.724878  764454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 14:33:04.809775  764454 ssh_runner.go:195] Run: openssl version
	I0916 14:33:04.864367  764454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/720544.pem && ln -fs /usr/share/ca-certificates/720544.pem /etc/ssl/certs/720544.pem"
	I0916 14:33:04.956693  764454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/720544.pem
	I0916 14:33:04.995058  764454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 13:33 /usr/share/ca-certificates/720544.pem
	I0916 14:33:04.995134  764454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/720544.pem
	I0916 14:33:05.036430  764454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/720544.pem /etc/ssl/certs/51391683.0"
	I0916 14:33:02.586370  764780 main.go:141] libmachine: (NoKubernetes-772968) Waiting to get IP...
	I0916 14:33:02.587096  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:02.587608  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:02.587672  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:02.587587  764856 retry.go:31] will retry after 191.210922ms: waiting for machine to come up
	I0916 14:33:02.780413  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:02.780991  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:02.781007  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:02.780929  764856 retry.go:31] will retry after 301.724297ms: waiting for machine to come up
	I0916 14:33:03.084636  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:03.085099  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:03.085132  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:03.085059  764856 retry.go:31] will retry after 392.869202ms: waiting for machine to come up
	I0916 14:33:03.479730  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:03.480316  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:03.480337  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:03.480275  764856 retry.go:31] will retry after 466.74842ms: waiting for machine to come up
	I0916 14:33:03.949289  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:03.949968  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:03.949991  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:03.949900  764856 retry.go:31] will retry after 498.344619ms: waiting for machine to come up
	I0916 14:33:04.450434  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:04.450977  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:04.450991  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:04.450921  764856 retry.go:31] will retry after 737.273433ms: waiting for machine to come up
	I0916 14:33:05.190064  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:05.190666  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:05.190709  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:05.190619  764856 retry.go:31] will retry after 1.024042814s: waiting for machine to come up
	I0916 14:33:06.217036  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | domain NoKubernetes-772968 has defined MAC address 52:54:00:a3:2b:59 in network mk-NoKubernetes-772968
	I0916 14:33:06.217642  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | unable to find current IP address of domain NoKubernetes-772968 in network mk-NoKubernetes-772968
	I0916 14:33:06.217660  764780 main.go:141] libmachine: (NoKubernetes-772968) DBG | I0916 14:33:06.217586  764856 retry.go:31] will retry after 1.399175008s: waiting for machine to come up
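The libmachine loop above keeps asking libvirt for the domain's IP and retries with a growing delay. A generic sketch of that wait pattern; lookupIP is a placeholder, not the libmachine API:

// Retry-with-growing-delay sketch mirroring the "waiting for machine to
// come up" loop above. lookupIP is a stand-in; libmachine really queries
// libvirt for the domain's DHCP lease.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: pretend the machine has no IP yet.
	return "", errors.New("unable to find current IP address of domain")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay with a little jitter, as the retry.go lines suggest.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2
	}
	return "", fmt.Errorf("timed out after %s waiting for machine IP", timeout)
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}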
	I0916 14:33:03.895583  764229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 14:33:03.895660  764229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 14:33:03.911349  764229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 14:33:03.922645  764229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:33:03.992786  764229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:33:05.254396  764229 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.261573273s)
	I0916 14:33:05.254435  764229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:33:05.525248  764229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:33:05.645282  764229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 14:33:05.778902  764229 api_server.go:52] waiting for apiserver process to appear ...
	I0916 14:33:05.778985  764229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:33:06.279245  764229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:33:06.779300  764229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:33:06.807844  764229 api_server.go:72] duration metric: took 1.028936083s to wait for apiserver process to appear ...
	I0916 14:33:06.807875  764229 api_server.go:88] waiting for apiserver healthz status ...
	I0916 14:33:06.807903  764229 api_server.go:253] Checking apiserver healthz at https://192.168.83.201:8443/healthz ...
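Once the control-plane manifests are in place, minikube waits for the kube-apiserver process and then polls its /healthz endpoint. A sketch of such a poll; it skips TLS verification only because it does not load the cluster CA, whereas minikube verifies against the generated certificates:

// Poll https://<ip>:8443/healthz until it returns 200 or the timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.83.201:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}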
	I0916 14:33:05.062865  764454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7205442.pem && ln -fs /usr/share/ca-certificates/7205442.pem /etc/ssl/certs/7205442.pem"
	I0916 14:33:05.153203  764454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7205442.pem
	I0916 14:33:05.159034  764454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 13:33 /usr/share/ca-certificates/7205442.pem
	I0916 14:33:05.159112  764454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7205442.pem
	I0916 14:33:05.171039  764454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7205442.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 14:33:05.184746  764454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 14:33:05.202953  764454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:33:05.214763  764454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 12:53 /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:33:05.214830  764454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 14:33:05.227378  764454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
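Each CA certificate is copied under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem above). A sketch of that step which shells out to openssl for the hash, since the Go standard library has no subject-hash helper; the paths are the ones from the log:

// Compute the subject hash with the openssl CLI (same command as in the log)
// and create the <hash>.0 symlink, mirroring `ln -fs`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}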
	I0916 14:33:05.241292  764454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 14:33:05.247194  764454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 14:33:05.258665  764454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 14:33:05.265176  764454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 14:33:05.272414  764454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 14:33:05.280357  764454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 14:33:05.288702  764454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
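The `openssl x509 -checkend 86400` runs above succeed only if the certificate is still valid 24 hours from now. An equivalent check with crypto/x509 from the standard library; the path is one of the certs listed in the log:

// Report whether a PEM certificate expires within the given duration,
// mirroring `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}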
	I0916 14:33:05.295538  764454 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-515632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-515632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 14:33:05.295659  764454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 14:33:05.295722  764454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 14:33:05.421423  764454 cri.go:89] found id: "66baf4a54c7adfd5ee25baa30b92a839a533167eb2ba8fb0c49730301f50fae0"
	I0916 14:33:05.421448  764454 cri.go:89] found id: "28e3edc32c2d4b0a70193f3cc57cef336f8d9b81c795682e4f720845f4ab692d"
	I0916 14:33:05.421466  764454 cri.go:89] found id: "c13f28968b3e43b85e1c87888e1bc3d61cd0513297b4f87bd82181602f9b4223"
	I0916 14:33:05.421472  764454 cri.go:89] found id: "1b102ccc084d032a58b46362e264006ee23f2ef3321eebcd1e675ed2af3cdba6"
	I0916 14:33:05.421476  764454 cri.go:89] found id: "ea50730f8da8bed225ece8a7a0e04a629bb54d15e60063c19849a70999c3bbad"
	I0916 14:33:05.421481  764454 cri.go:89] found id: "17ca7ce6689b0852815eb01782354f259210f207e4d4d6aadae278893350aa76"
	I0916 14:33:05.421485  764454 cri.go:89] found id: "e655f72e7bcbc67b60b8dca3cc6ed0bf778d8777de835dc7688c00fd5eeda2ca"
	I0916 14:33:05.421489  764454 cri.go:89] found id: "7efef14d3261ab89e1e2abef06f7973f94e9917b1b38ee5b1e6337821c67088f"
	I0916 14:33:05.421493  764454 cri.go:89] found id: "77722c73b0ddc0fea2bd6b1ea4adf71e2e4509a27f355a91127dbada8511a509"
	I0916 14:33:05.421504  764454 cri.go:89] found id: "ea6be3fca55408bd4e052d7d8a10c31d191ae9a4881cbbd68f760fc3edcdbf48"
	I0916 14:33:05.421508  764454 cri.go:89] found id: "74d6faa7bc68a3aae8ebc5d64e90ff1dfa277041271bf28c38f0fd1fcc2ca2f2"
	I0916 14:33:05.421512  764454 cri.go:89] found id: "0a0836d214469cdace3fd8f54b0ea39f4cd1f08a48b593830e163c57c60d858e"
	I0916 14:33:05.421516  764454 cri.go:89] found id: ""
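The IDs above come from listing kube-system containers through crictl with a label filter. A sketch of that listing via os/exec, assuming crictl is installed and configured for the CRI-O socket; this is not minikube's cri.go:

// Run crictl with the kube-system label filter and split the output into IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}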
	I0916 14:33:05.421589  764454 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.073600503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726497208073578570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=230824e6-c7da-454a-a6e9-d9b8fa562cec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.074384611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=667e989c-b938-4124-b0e2-df3041d37937 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.074474178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=667e989c-b938-4124-b0e2-df3041d37937 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.075055744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcffeff0ecf2111bfb19f3f5c9b593c284285bc5eeb500782516bf9d03860eef,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726497204649963035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f94d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164ba6994824a10a446fa807802537be4b013773ba790b0c688c85d09c8736eb,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726497204649347498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d5150a928ebea87c64782fa16495c8bf775fef586390ae6e62ac221a7ee70,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497204634790547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913bacca919c012aa631de3e03e968fc3dbc59dd4dab46f669aae302938814bf,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726497200834930476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ec57c81e3f300f7ad8119afd23af7a4234f0c398a0300b943e5afa03d23c41,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726497200843375904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
69d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66196a627280ce13f74c1b5287fe544ec5ab765c940b629c117ea8280593efcc,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726497200852194384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},A
nnotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204d897b54473bcce97d00508d7da7885d5d8e1da3fb173b851e0823fe1f8ba9,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726497200826275448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa0
82736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8deaf9faf8e9a4bb298779c8209b85a8791c76fc58156b8784e318ce3e9775,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726497184715262694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f9
4d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7540ab9ce3c8e72c560677e75f63b970a8dda6f27b387f22b39aae4479aa1c0,PodSandboxId:389bf46a882332456c51495db7c9f019460df0eb19d8d56189601cc2e5f0ea14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497185658890382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf55077af24c1cbcbc9f05f03026758fd21c5b54c359b2b58f5bc17dcf09c3,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497185413156821,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253ee2431d01efa538d1dbed09f07fc9692d00358260f1a386383370df1444b,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726497184561404826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66baf4a54c7adfd5ee25baa30b92a839a533167eb2ba8fb0c49730301f50fae0,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726497184465344070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e3edc32c2d4b0a70193f3cc57cef336f8d9b81c795682e4f720845f4ab692d,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726497184453234312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13f28968b3e43b85e1c87888e1bc3d61cd0513297b4f87bd82181602f9b4223,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726497184446621296,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa082736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b102ccc084d032a58b46362e264006ee23f2ef3321eebcd1e675ed2af3cdba6,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fd
cc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726497184242473589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea50730f8da8bed225ece8a7a0e04a629bb54d15e60063c19849a70999c3bbad,PodSandboxId:fc8cc420f9970909ed609361f257bb2b373011712e2f7fcf4a94141d980c323d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497163668222908,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=667e989c-b938-4124-b0e2-df3041d37937 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.152245693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d169a24-c0f0-4ddf-a606-0699ad56169a name=/runtime.v1.RuntimeService/Version
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.152379371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d169a24-c0f0-4ddf-a606-0699ad56169a name=/runtime.v1.RuntimeService/Version
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.154212655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14abf2c0-f677-4706-b5d1-bdf324d55b95 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.154795195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726497208154762498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14abf2c0-f677-4706-b5d1-bdf324d55b95 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.155471327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d44027ee-ddc1-4f54-baa2-1d36cc7afc54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.155541759Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d44027ee-ddc1-4f54-baa2-1d36cc7afc54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.156943826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcffeff0ecf2111bfb19f3f5c9b593c284285bc5eeb500782516bf9d03860eef,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726497204649963035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f94d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164ba6994824a10a446fa807802537be4b013773ba790b0c688c85d09c8736eb,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726497204649347498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d5150a928ebea87c64782fa16495c8bf775fef586390ae6e62ac221a7ee70,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497204634790547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913bacca919c012aa631de3e03e968fc3dbc59dd4dab46f669aae302938814bf,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726497200834930476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ec57c81e3f300f7ad8119afd23af7a4234f0c398a0300b943e5afa03d23c41,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726497200843375904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
69d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66196a627280ce13f74c1b5287fe544ec5ab765c940b629c117ea8280593efcc,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726497200852194384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},A
nnotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204d897b54473bcce97d00508d7da7885d5d8e1da3fb173b851e0823fe1f8ba9,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726497200826275448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa0
82736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8deaf9faf8e9a4bb298779c8209b85a8791c76fc58156b8784e318ce3e9775,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726497184715262694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f9
4d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7540ab9ce3c8e72c560677e75f63b970a8dda6f27b387f22b39aae4479aa1c0,PodSandboxId:389bf46a882332456c51495db7c9f019460df0eb19d8d56189601cc2e5f0ea14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497185658890382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf55077af24c1cbcbc9f05f03026758fd21c5b54c359b2b58f5bc17dcf09c3,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497185413156821,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253ee2431d01efa538d1dbed09f07fc9692d00358260f1a386383370df1444b,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726497184561404826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66baf4a54c7adfd5ee25baa30b92a839a533167eb2ba8fb0c49730301f50fae0,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726497184465344070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e3edc32c2d4b0a70193f3cc57cef336f8d9b81c795682e4f720845f4ab692d,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726497184453234312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13f28968b3e43b85e1c87888e1bc3d61cd0513297b4f87bd82181602f9b4223,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726497184446621296,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa082736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b102ccc084d032a58b46362e264006ee23f2ef3321eebcd1e675ed2af3cdba6,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fd
cc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726497184242473589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea50730f8da8bed225ece8a7a0e04a629bb54d15e60063c19849a70999c3bbad,PodSandboxId:fc8cc420f9970909ed609361f257bb2b373011712e2f7fcf4a94141d980c323d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497163668222908,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d44027ee-ddc1-4f54-baa2-1d36cc7afc54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.222110459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81076a44-c496-46c3-b2da-9d964946bed7 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.222211894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81076a44-c496-46c3-b2da-9d964946bed7 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.223596080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4197142b-1b9d-4c17-a049-239ace2e3270 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.224162555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726497208224131079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4197142b-1b9d-4c17-a049-239ace2e3270 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.224742507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09434fa7-557e-41f9-9a37-8f4bdde3f44c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.224811764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09434fa7-557e-41f9-9a37-8f4bdde3f44c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.225134577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcffeff0ecf2111bfb19f3f5c9b593c284285bc5eeb500782516bf9d03860eef,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726497204649963035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f94d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164ba6994824a10a446fa807802537be4b013773ba790b0c688c85d09c8736eb,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726497204649347498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d5150a928ebea87c64782fa16495c8bf775fef586390ae6e62ac221a7ee70,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497204634790547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913bacca919c012aa631de3e03e968fc3dbc59dd4dab46f669aae302938814bf,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726497200834930476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ec57c81e3f300f7ad8119afd23af7a4234f0c398a0300b943e5afa03d23c41,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726497200843375904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
69d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66196a627280ce13f74c1b5287fe544ec5ab765c940b629c117ea8280593efcc,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726497200852194384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},A
nnotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204d897b54473bcce97d00508d7da7885d5d8e1da3fb173b851e0823fe1f8ba9,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726497200826275448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa0
82736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8deaf9faf8e9a4bb298779c8209b85a8791c76fc58156b8784e318ce3e9775,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726497184715262694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f9
4d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7540ab9ce3c8e72c560677e75f63b970a8dda6f27b387f22b39aae4479aa1c0,PodSandboxId:389bf46a882332456c51495db7c9f019460df0eb19d8d56189601cc2e5f0ea14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497185658890382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf55077af24c1cbcbc9f05f03026758fd21c5b54c359b2b58f5bc17dcf09c3,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497185413156821,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253ee2431d01efa538d1dbed09f07fc9692d00358260f1a386383370df1444b,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726497184561404826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66baf4a54c7adfd5ee25baa30b92a839a533167eb2ba8fb0c49730301f50fae0,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726497184465344070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e3edc32c2d4b0a70193f3cc57cef336f8d9b81c795682e4f720845f4ab692d,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726497184453234312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13f28968b3e43b85e1c87888e1bc3d61cd0513297b4f87bd82181602f9b4223,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726497184446621296,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa082736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b102ccc084d032a58b46362e264006ee23f2ef3321eebcd1e675ed2af3cdba6,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fd
cc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726497184242473589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea50730f8da8bed225ece8a7a0e04a629bb54d15e60063c19849a70999c3bbad,PodSandboxId:fc8cc420f9970909ed609361f257bb2b373011712e2f7fcf4a94141d980c323d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497163668222908,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09434fa7-557e-41f9-9a37-8f4bdde3f44c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.262680609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0358c960-30af-43f2-b9b4-c91e73af93a3 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.262794559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0358c960-30af-43f2-b9b4-c91e73af93a3 name=/runtime.v1.RuntimeService/Version
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.265032039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03bcade9-f084-4cf8-99c5-bd1e7c5fff5a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.265501469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726497208265473414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03bcade9-f084-4cf8-99c5-bd1e7c5fff5a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.266209489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8974f71-8850-4c80-b9e1-b8e061322a3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.266286805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8974f71-8850-4c80-b9e1-b8e061322a3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 14:33:28 kubernetes-upgrade-515632 crio[2278]: time="2024-09-16 14:33:28.266805834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcffeff0ecf2111bfb19f3f5c9b593c284285bc5eeb500782516bf9d03860eef,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726497204649963035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f94d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164ba6994824a10a446fa807802537be4b013773ba790b0c688c85d09c8736eb,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726497204649347498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d5150a928ebea87c64782fa16495c8bf775fef586390ae6e62ac221a7ee70,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497204634790547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913bacca919c012aa631de3e03e968fc3dbc59dd4dab46f669aae302938814bf,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726497200834930476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ec57c81e3f300f7ad8119afd23af7a4234f0c398a0300b943e5afa03d23c41,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726497200843375904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
69d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66196a627280ce13f74c1b5287fe544ec5ab765c940b629c117ea8280593efcc,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726497200852194384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},A
nnotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204d897b54473bcce97d00508d7da7885d5d8e1da3fb173b851e0823fe1f8ba9,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726497200826275448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa0
82736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8deaf9faf8e9a4bb298779c8209b85a8791c76fc58156b8784e318ce3e9775,PodSandboxId:e5c40afae20e9b542c32cbeba2cecc7c3a547dae189233b33bd0607f7f6229a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726497184715262694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654c784a-51c1-4379-9c13-6e5f5f9
4d520,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7540ab9ce3c8e72c560677e75f63b970a8dda6f27b387f22b39aae4479aa1c0,PodSandboxId:389bf46a882332456c51495db7c9f019460df0eb19d8d56189601cc2e5f0ea14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726497185658890382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf55077af24c1cbcbc9f05f03026758fd21c5b54c359b2b58f5bc17dcf09c3,PodSandboxId:c7d733a3d418539f96154c804187efd10690bb7eb0a5cce4adc932b42fe12800,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497185413156821,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fz5br,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc8be18-d6f1-41f9-b28d-db2229de5783,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253ee2431d01efa538d1dbed09f07fc9692d00358260f1a386383370df1444b,PodSandboxId:d2569a885d86e25932d9e81d00d508dc3ec3c1a7fae081d1774a71b30e69daac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726497184561404826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w96ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc7d7c7-ff39-4003-820c-6a1577c86392,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66baf4a54c7adfd5ee25baa30b92a839a533167eb2ba8fb0c49730301f50fae0,PodSandboxId:ec4c9952367214b6e9b7b906c6c6574d27f11ba480b168f921f492160098481e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726497184465344070,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb720b25351bf5f0f0a6d9de88f88cd7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e3edc32c2d4b0a70193f3cc57cef336f8d9b81c795682e4f720845f4ab692d,PodSandboxId:54b05f324cc11319f86493ff9f632744ddbd7c1d5da86539cfda19751a9dddc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726497184453234312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079d63d887f55e0943ff18a08f6f3fb2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13f28968b3e43b85e1c87888e1bc3d61cd0513297b4f87bd82181602f9b4223,PodSandboxId:fb66ebc3d8a4ed790dcf7e3b72640f33615a3846f9e08b83bb02d3950a6d712d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726497184446621296,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b418ecf8faa082736d14e5f9e442f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b102ccc084d032a58b46362e264006ee23f2ef3321eebcd1e675ed2af3cdba6,PodSandboxId:4e70e1e1ec7f559ddb6fe418327774df23871312584b9af133362ef592cb069e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fd
cc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726497184242473589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269d961d405d66d0d5eb65d253c12f85,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea50730f8da8bed225ece8a7a0e04a629bb54d15e60063c19849a70999c3bbad,PodSandboxId:fc8cc420f9970909ed609361f257bb2b373011712e2f7fcf4a94141d980c323d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726497163668222908,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jgkxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e12c871-fd38-4ae1-a698-a9160206501e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8974f71-8850-4c80-b9e1-b8e061322a3e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dcffeff0ecf21       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   e5c40afae20e9       storage-provisioner
	164ba6994824a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   d2569a885d86e       kube-proxy-w96ml
	5c3d5150a928e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   c7d733a3d4185       coredns-7c65d6cfc9-fz5br
	66196a627280c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   54b05f324cc11       etcd-kubernetes-upgrade-515632
	28ec57c81e3f3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   4e70e1e1ec7f5       kube-apiserver-kubernetes-upgrade-515632
	913bacca919c0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   ec4c995236721       kube-scheduler-kubernetes-upgrade-515632
	204d897b54473       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   fb66ebc3d8a4e       kube-controller-manager-kubernetes-upgrade-515632
	d7540ab9ce3c8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   22 seconds ago      Running             coredns                   1                   389bf46a88233       coredns-7c65d6cfc9-jgkxx
	fcaf55077af24       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   22 seconds ago      Exited              coredns                   1                   c7d733a3d4185       coredns-7c65d6cfc9-fz5br
	ea8deaf9faf8e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago      Exited              storage-provisioner       1                   e5c40afae20e9       storage-provisioner
	3253ee2431d01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   23 seconds ago      Exited              kube-proxy                1                   d2569a885d86e       kube-proxy-w96ml
	66baf4a54c7ad       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   23 seconds ago      Exited              kube-scheduler            1                   ec4c995236721       kube-scheduler-kubernetes-upgrade-515632
	28e3edc32c2d4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Exited              etcd                      1                   54b05f324cc11       etcd-kubernetes-upgrade-515632
	c13f28968b3e4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago      Exited              kube-controller-manager   1                   fb66ebc3d8a4e       kube-controller-manager-kubernetes-upgrade-515632
	1b102ccc084d0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   24 seconds ago      Exited              kube-apiserver            1                   4e70e1e1ec7f5       kube-apiserver-kubernetes-upgrade-515632
	ea50730f8da8b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   44 seconds ago      Exited              coredns                   0                   fc8cc420f9970       coredns-7c65d6cfc9-jgkxx
	
	
	==> coredns [5c3d5150a928ebea87c64782fa16495c8bf775fef586390ae6e62ac221a7ee70] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d7540ab9ce3c8e72c560677e75f63b970a8dda6f27b387f22b39aae4479aa1c0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ea50730f8da8bed225ece8a7a0e04a629bb54d15e60063c19849a70999c3bbad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fcaf55077af24c1cbcbc9f05f03026758fd21c5b54c359b2b58f5bc17dcf09c3] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-515632
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-515632
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 14:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-515632
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 14:33:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 14:33:24 +0000   Mon, 16 Sep 2024 14:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 14:33:24 +0000   Mon, 16 Sep 2024 14:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 14:33:24 +0000   Mon, 16 Sep 2024 14:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 14:33:24 +0000   Mon, 16 Sep 2024 14:32:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    kubernetes-upgrade-515632
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 922538566a1f42c99516eb7942457068
	  System UUID:                92253856-6a1f-42c9-9516-eb7942457068
	  Boot ID:                    8886af99-c871-4fbf-bf8d-f93257675a4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fz5br                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     46s
	  kube-system                 coredns-7c65d6cfc9-jgkxx                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     46s
	  kube-system                 etcd-kubernetes-upgrade-515632                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         50s
	  kube-system                 kube-apiserver-kubernetes-upgrade-515632             250m (12%)    0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-515632    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-w96ml                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-kubernetes-upgrade-515632             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 45s                kube-proxy       
	  Normal  NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  56s (x8 over 58s)  kubelet          Node kubernetes-upgrade-515632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 58s)  kubelet          Node kubernetes-upgrade-515632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 58s)  kubelet          Node kubernetes-upgrade-515632 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node kubernetes-upgrade-515632 event: Registered Node kubernetes-upgrade-515632 in Controller
	  Normal  CIDRAssignmentFailed     46s                cidrAllocator    Node kubernetes-upgrade-515632 status is now: CIDRAssignmentFailed
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-515632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-515632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-515632 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-515632 event: Registered Node kubernetes-upgrade-515632 in Controller
	
	
	==> dmesg <==
	[  +1.578055] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.299003] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.065454] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068099] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.196574] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.175198] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.297690] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +4.109641] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +2.017577] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.057872] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.652565] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.118950] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.632014] kauditd_printk_skb: 102 callbacks suppressed
	[Sep16 14:33] systemd-fstab-generator[2204]: Ignoring "noauto" option for root device
	[  +0.197964] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.211590] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +0.157111] systemd-fstab-generator[2242]: Ignoring "noauto" option for root device
	[  +0.384740] systemd-fstab-generator[2270]: Ignoring "noauto" option for root device
	[  +0.887674] systemd-fstab-generator[2415]: Ignoring "noauto" option for root device
	[  +3.718285] kauditd_printk_skb: 229 callbacks suppressed
	[ +12.852004] systemd-fstab-generator[3433]: Ignoring "noauto" option for root device
	[  +5.223164] kauditd_printk_skb: 55 callbacks suppressed
	[  +0.653314] systemd-fstab-generator[3951]: Ignoring "noauto" option for root device
	
	
	==> etcd [28e3edc32c2d4b0a70193f3cc57cef336f8d9b81c795682e4f720845f4ab692d] <==
	{"level":"info","ts":"2024-09-16T14:33:06.984809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T14:33:06.984826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgPreVoteResp from 59d4e9d626571860 at term 2"}
	{"level":"info","ts":"2024-09-16T14:33:06.984837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T14:33:06.984856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgVoteResp from 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-09-16T14:33:06.984865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T14:33:06.984872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 59d4e9d626571860 elected leader 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-09-16T14:33:06.988899Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"59d4e9d626571860","local-member-attributes":"{Name:kubernetes-upgrade-515632 ClientURLs:[https://192.168.39.161:2379]}","request-path":"/0/members/59d4e9d626571860/attributes","cluster-id":"641f62d988bc06c1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T14:33:06.989122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:33:06.990775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:33:06.991291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:33:06.995025Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.161:2379"}
	{"level":"info","ts":"2024-09-16T14:33:06.991527Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T14:33:06.992040Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:33:06.997971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T14:33:06.999835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T14:33:08.706778Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T14:33:08.706961Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-515632","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"]}
	{"level":"warn","ts":"2024-09-16T14:33:08.708752Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T14:33:08.708960Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T14:33:08.754306Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.161:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T14:33:08.754391Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.161:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T14:33:08.754456Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"59d4e9d626571860","current-leader-member-id":"59d4e9d626571860"}
	{"level":"info","ts":"2024-09-16T14:33:08.757977Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-09-16T14:33:08.758299Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-09-16T14:33:08.758341Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-515632","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"]}
	
	
	==> etcd [66196a627280ce13f74c1b5287fe544ec5ab765c940b629c117ea8280593efcc] <==
	{"level":"info","ts":"2024-09-16T14:33:21.199408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 switched to configuration voters=(6473055670413760608)"}
	{"level":"info","ts":"2024-09-16T14:33:21.199472Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","added-peer-id":"59d4e9d626571860","added-peer-peer-urls":["https://192.168.39.161:2380"]}
	{"level":"info","ts":"2024-09-16T14:33:21.199580Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:33:21.199625Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T14:33:21.203198Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T14:33:21.203278Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-09-16T14:33:21.203441Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-09-16T14:33:21.204430Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"59d4e9d626571860","initial-advertise-peer-urls":["https://192.168.39.161:2380"],"listen-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.161:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T14:33:21.204506Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T14:33:22.581432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T14:33:22.581584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T14:33:22.581686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgPreVoteResp from 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-09-16T14:33:22.581729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T14:33:22.581763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgVoteResp from 59d4e9d626571860 at term 4"}
	{"level":"info","ts":"2024-09-16T14:33:22.581790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T14:33:22.581821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 59d4e9d626571860 elected leader 59d4e9d626571860 at term 4"}
	{"level":"info","ts":"2024-09-16T14:33:22.588324Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"59d4e9d626571860","local-member-attributes":"{Name:kubernetes-upgrade-515632 ClientURLs:[https://192.168.39.161:2379]}","request-path":"/0/members/59d4e9d626571860/attributes","cluster-id":"641f62d988bc06c1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T14:33:22.588523Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:33:22.588882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T14:33:22.589518Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:33:22.590453Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T14:33:22.590701Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T14:33:22.590763Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T14:33:22.591216Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T14:33:22.591961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.161:2379"}
	
	
	==> kernel <==
	 14:33:28 up 1 min,  0 users,  load average: 1.51, 0.48, 0.17
	Linux kubernetes-upgrade-515632 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1b102ccc084d032a58b46362e264006ee23f2ef3321eebcd1e675ed2af3cdba6] <==
	W0916 14:33:17.981401       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.005181       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.025188       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.032585       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.047162       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.077273       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.086774       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.095306       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.128336       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.192256       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.217849       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.250992       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.255568       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.274252       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.339032       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.378076       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.399552       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.488417       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.489969       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.497972       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.563472       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.563472       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.686264       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.872946       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 14:33:18.876630       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [28ec57c81e3f300f7ad8119afd23af7a4234f0c398a0300b943e5afa03d23c41] <==
	I0916 14:33:24.114728       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 14:33:24.118198       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 14:33:24.126408       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 14:33:24.134592       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 14:33:24.134625       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 14:33:24.134933       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 14:33:24.134972       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 14:33:24.135010       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 14:33:24.135531       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 14:33:24.136115       1 aggregator.go:171] initial CRD sync complete...
	I0916 14:33:24.138207       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 14:33:24.138232       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 14:33:24.138255       1 cache.go:39] Caches are synced for autoregister controller
	I0916 14:33:24.140530       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 14:33:24.140558       1 policy_source.go:224] refreshing policies
	I0916 14:33:24.145722       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 14:33:24.210753       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 14:33:25.021238       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 14:33:25.655565       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 14:33:25.666360       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 14:33:25.711196       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 14:33:25.835055       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 14:33:25.842079       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 14:33:27.013133       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 14:33:27.791832       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [204d897b54473bcce97d00508d7da7885d5d8e1da3fb173b851e0823fe1f8ba9] <==
	I0916 14:33:27.512303       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 14:33:27.530691       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 14:33:27.532046       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 14:33:27.533329       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 14:33:27.541333       1 shared_informer.go:320] Caches are synced for node
	I0916 14:33:27.541447       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 14:33:27.541543       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 14:33:27.541566       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 14:33:27.541626       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 14:33:27.541878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-515632"
	I0916 14:33:27.580759       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 14:33:27.580953       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 14:33:27.581039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-515632"
	I0916 14:33:27.586951       1 shared_informer.go:320] Caches are synced for GC
	I0916 14:33:27.588621       1 shared_informer.go:320] Caches are synced for TTL
	I0916 14:33:27.595281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 14:33:27.596621       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 14:33:27.600380       1 shared_informer.go:320] Caches are synced for deployment
	I0916 14:33:27.633938       1 shared_informer.go:320] Caches are synced for disruption
	I0916 14:33:27.642286       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 14:33:27.650316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="183.773552ms"
	I0916 14:33:27.651021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.183µs"
	I0916 14:33:28.111424       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 14:33:28.117899       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 14:33:28.117942       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [c13f28968b3e43b85e1c87888e1bc3d61cd0513297b4f87bd82181602f9b4223] <==
	I0916 14:33:06.636750       1 serving.go:386] Generated self-signed cert in-memory
	I0916 14:33:07.083552       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 14:33:07.083590       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:33:07.086369       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 14:33:07.086854       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 14:33:07.087300       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 14:33:07.090980       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [164ba6994824a10a446fa807802537be4b013773ba790b0c688c85d09c8736eb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 14:33:25.066987       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 14:33:25.078735       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.161"]
	E0916 14:33:25.078806       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 14:33:25.123780       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 14:33:25.123867       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 14:33:25.123889       1 server_linux.go:169] "Using iptables Proxier"
	I0916 14:33:25.128046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 14:33:25.128705       1 server.go:483] "Version info" version="v1.31.1"
	I0916 14:33:25.128744       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:33:25.130414       1 config.go:199] "Starting service config controller"
	I0916 14:33:25.130483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 14:33:25.130512       1 config.go:105] "Starting endpoint slice config controller"
	I0916 14:33:25.130516       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 14:33:25.131142       1 config.go:328] "Starting node config controller"
	I0916 14:33:25.131181       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 14:33:25.231524       1 shared_informer.go:320] Caches are synced for node config
	I0916 14:33:25.231617       1 shared_informer.go:320] Caches are synced for service config
	I0916 14:33:25.231814       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [3253ee2431d01efa538d1dbed09f07fc9692d00358260f1a386383370df1444b] <==
	
	
	==> kube-scheduler [66baf4a54c7adfd5ee25baa30b92a839a533167eb2ba8fb0c49730301f50fae0] <==
	I0916 14:33:06.356781       1 serving.go:386] Generated self-signed cert in-memory
	W0916 14:33:08.524541       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 14:33:08.524721       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 14:33:08.524816       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 14:33:08.524847       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 14:33:08.574351       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 14:33:08.576728       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0916 14:33:08.576973       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0916 14:33:08.583475       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 14:33:08.583555       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 14:33:08.583683       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0916 14:33:08.590815       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0916 14:33:08.591278       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0916 14:33:08.591423       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 14:33:08.594204       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 14:33:08.591497       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0916 14:33:08.594428       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [913bacca919c012aa631de3e03e968fc3dbc59dd4dab46f669aae302938814bf] <==
	I0916 14:33:22.457786       1 serving.go:386] Generated self-signed cert in-memory
	W0916 14:33:24.071220       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 14:33:24.071277       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 14:33:24.071288       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 14:33:24.071294       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 14:33:24.145862       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 14:33:24.145934       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 14:33:24.148061       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 14:33:24.148070       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 14:33:24.148398       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 14:33:24.148097       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 14:33:24.249881       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 14:33:20 kubernetes-upgrade-515632 kubelet[3440]: E0916 14:33:20.696787    3440 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.161:8443: connect: connection refused" node="kubernetes-upgrade-515632"
	Sep 16 14:33:20 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:20.803594    3440 scope.go:117] "RemoveContainer" containerID="1b102ccc084d032a58b46362e264006ee23f2ef3321eebcd1e675ed2af3cdba6"
	Sep 16 14:33:20 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:20.803710    3440 scope.go:117] "RemoveContainer" containerID="28e3edc32c2d4b0a70193f3cc57cef336f8d9b81c795682e4f720845f4ab692d"
	Sep 16 14:33:20 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:20.805271    3440 scope.go:117] "RemoveContainer" containerID="c13f28968b3e43b85e1c87888e1bc3d61cd0513297b4f87bd82181602f9b4223"
	Sep 16 14:33:20 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:20.805748    3440 scope.go:117] "RemoveContainer" containerID="66baf4a54c7adfd5ee25baa30b92a839a533167eb2ba8fb0c49730301f50fae0"
	Sep 16 14:33:20 kubernetes-upgrade-515632 kubelet[3440]: E0916 14:33:20.919940    3440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-515632?timeout=10s\": dial tcp 192.168.39.161:8443: connect: connection refused" interval="800ms"
	Sep 16 14:33:21 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:21.099497    3440 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-515632"
	Sep 16 14:33:21 kubernetes-upgrade-515632 kubelet[3440]: E0916 14:33:21.100564    3440 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.161:8443: connect: connection refused" node="kubernetes-upgrade-515632"
	Sep 16 14:33:21 kubernetes-upgrade-515632 kubelet[3440]: W0916 14:33:21.206794    3440 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-515632&limit=500&resourceVersion=0": dial tcp 192.168.39.161:8443: connect: connection refused
	Sep 16 14:33:21 kubernetes-upgrade-515632 kubelet[3440]: E0916 14:33:21.206881    3440 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-515632&limit=500&resourceVersion=0\": dial tcp 192.168.39.161:8443: connect: connection refused" logger="UnhandledError"
	Sep 16 14:33:21 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:21.465449    3440 scope.go:117] "RemoveContainer" containerID="7efef14d3261ab89e1e2abef06f7973f94e9917b1b38ee5b1e6337821c67088f"
	Sep 16 14:33:21 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:21.902139    3440 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-515632"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.176519    3440 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-515632"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.176975    3440 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-515632"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.177107    3440 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.178326    3440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.292982    3440 apiserver.go:52] "Watching apiserver"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.312136    3440 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.385236    3440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dc7d7c7-ff39-4003-820c-6a1577c86392-xtables-lock\") pod \"kube-proxy-w96ml\" (UID: \"7dc7d7c7-ff39-4003-820c-6a1577c86392\") " pod="kube-system/kube-proxy-w96ml"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.385369    3440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dc7d7c7-ff39-4003-820c-6a1577c86392-lib-modules\") pod \"kube-proxy-w96ml\" (UID: \"7dc7d7c7-ff39-4003-820c-6a1577c86392\") " pod="kube-system/kube-proxy-w96ml"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.385470    3440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/654c784a-51c1-4379-9c13-6e5f5f94d520-tmp\") pod \"storage-provisioner\" (UID: \"654c784a-51c1-4379-9c13-6e5f5f94d520\") " pod="kube-system/storage-provisioner"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.597795    3440 scope.go:117] "RemoveContainer" containerID="ea8deaf9faf8e9a4bb298779c8209b85a8791c76fc58156b8784e318ce3e9775"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.598399    3440 scope.go:117] "RemoveContainer" containerID="3253ee2431d01efa538d1dbed09f07fc9692d00358260f1a386383370df1444b"
	Sep 16 14:33:24 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:24.598991    3440 scope.go:117] "RemoveContainer" containerID="fcaf55077af24c1cbcbc9f05f03026758fd21c5b54c359b2b58f5bc17dcf09c3"
	Sep 16 14:33:26 kubernetes-upgrade-515632 kubelet[3440]: I0916 14:33:26.550485    3440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [dcffeff0ecf2111bfb19f3f5c9b593c284285bc5eeb500782516bf9d03860eef] <==
	I0916 14:33:24.935353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 14:33:24.946601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 14:33:24.951732       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [ea8deaf9faf8e9a4bb298779c8209b85a8791c76fc58156b8784e318ce3e9775] <==
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 88 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0001b82d0, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0001b82c0)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0000765a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0001f8780, 0x18e5530, 0xc0004394c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00030f020)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00030f020, 0x18b3d60, 0xc00038c720, 0x1, 0xc000082180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00030f020, 0x3b9aca00, 0x0, 0x1, 0xc000082180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00030f020, 0x3b9aca00, 0xc000082180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 14:33:27.641166  765798 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19652-713072/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-515632 -n kubernetes-upgrade-515632
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-515632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-515632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-515632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-515632: (1.636731531s)
--- FAIL: TestKubernetesUpgrade (366.18s)
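The "failed to output last start logs ... bufio.Scanner: token too long" message in the stderr block above is Go's bufio.Scanner refusing a token larger than its buffer limit (64 KiB by default). The following is only a minimal, self-contained illustration of how that error arises and how a larger buffer avoids it; it is not minikube's logs.go code, and the identifiers are invented for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// A single line longer than bufio.MaxScanTokenSize (64 KiB) reproduces
	// the "bufio.Scanner: token too long" error seen in the report.
	long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

	s := bufio.NewScanner(strings.NewReader(long))
	if !s.Scan() {
		fmt.Fprintln(os.Stderr, "default scanner:", s.Err())
	}

	// Supplying a larger buffer (here up to 1 MiB per token) lets the same
	// line be scanned successfully.
	s = bufio.NewScanner(strings.NewReader(long))
	s.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	if s.Scan() {
		fmt.Println("large-buffer scanner read", len(s.Text()), "bytes")
	}
}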

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.054s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.213:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.213:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (18m51s)
		TestStartStop (23m31s)
		TestStartStop/group/default-k8s-diff-port (16m55s)
		TestStartStop/group/default-k8s-diff-port/serial (16m55s)
		TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (4m18s)
		TestStartStop/group/embed-certs (18m41s)
		TestStartStop/group/embed-certs/serial (18m41s)
		TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (3m37s)
		TestStartStop/group/no-preload (18m44s)
		TestStartStop/group/no-preload/serial (18m44s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4m30s)
		TestStartStop/group/old-k8s-version (18m44s)
		TestStartStop/group/old-k8s-version/serial (18m44s)
		TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25s)
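The panic above is the Go test runner's global deadline: once the binary's -timeout (here 2h0m0s) expires, testing.(*M).startAlarm panics and dumps every goroutine, listing the tests still in flight. A hypothetical standalone test (not part of this suite) reproduces the same behaviour:

package sketch

import (
	"testing"
	"time"
)

// Run with:  go test -run TestSleepsPastTimeout -timeout 5s
// After 5 seconds the test binary panics with "test timed out after 5s"
// and lists this test under "running tests:", just like the 2h0m0s panic
// captured above.
func TestSleepsPastTimeout(t *testing.T) {
	time.Sleep(time.Minute)
}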

                                                
                                                
goroutine 3422 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0008d29c0, 0xc0009c3bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc0004fc2d0, {0x4cf86a0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x4db6de0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00081b040)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00081b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0004bcc80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2424 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc000064700}, 0xc001417f50, 0xc001417f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc000064700}, 0x40?, 0xc001417f50, 0xc001417f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc000064700?}, 0xc0015e8b60?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013917d0?, 0x5a1aa4?, 0xc000113340?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2457
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3315 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3314
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1671 [chan receive, 20 minutes]:
testing.(*T).Run(0xc0008d21a0, {0x2925a4b?, 0x55b79c?}, 0xc00013aa50)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0008d21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0008d21a0, 0x3410a38)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 24 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0xff
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 23
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x167

                                                
                                                
goroutine 202 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f9170797e88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000a1af80?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000a1af80)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc000a1af80)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000231dc0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000231dc0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000870d20, {0x3781d70, 0xc000231dc0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000870d20)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc001c884e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 199
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 709 [select, 76 minutes]:
net/http.(*persistConn).readLoop(0xc0013727e0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 707
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 2111 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001c89040)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001c89040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001c89040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001c89040, 0xc000a1a000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2110
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 335 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013aea80, 0xc001ab6230)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 334
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3330 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc000064700}, 0xc001d01f50, 0xc001d01f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc000064700}, 0x60?, 0xc001d01f50, 0xc001d01f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc000064700?}, 0xc001faa340?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00138d7d0?, 0x5a1aa4?, 0xc001ab6b60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 364 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 323
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2250 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0005341a0, {0x2951f4f?, 0xc00193f570?}, 0xc000a1b680)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0005341a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0005341a0, 0xc000a1ae80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1827
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2561 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc000462700}, {0x3782400, 0xc00040b020}, 0x1, 0x0, 0xc001315c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc000463f10?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc000463f10}, 0xc001faa000, {0xc0017e4c48, 0x11}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f170, 0xc000463f10}, 0xc001faa000, {0xc0017e4c48, 0x11}, {0x2930d97?, 0xc001c4ff60?}, {0x55b653?, 0x4b1aaf?}, {0xc0000ea200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001faa000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001faa000, 0xc0004bce80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2320
	/usr/local/go/src/testing/testing.go:1743 +0x390
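
Goroutine 2561 is validateAppExistsAfterStop sitting inside PodWait, which drives apimachinery's PollUntilContextTimeout; the duration arguments in the trace decode to a 1-second poll interval (0x3b9aca00 ns) and a 9-minute timeout (0x7dba821800 ns). A minimal sketch of that polling shape, assuming a hypothetical kubeconfig path, namespace, and pod name rather than the test's real selectors:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls once a second until the named pod reports
// Running, mirroring the PollUntilContextTimeout call inside PodWait.
func waitForRunningPod(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return pod.Status.Phase == corev1.PodRunning, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForRunningPod(context.Background(), cs, "example-ns", "example-pod", 9*time.Minute)
	fmt.Println("wait result:", err)
}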

                                                
                                                
goroutine 446 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019fc780, 0xc00198abd0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 283
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 325 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc000064700}, 0xc00138b750, 0xc001384f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc000064700}, 0xe0?, 0xc00138b750, 0xc00138b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc000064700?}, 0xc0008d21a0?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00138b7d0?, 0x5a1aa4?, 0xc0001135e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 365
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2310 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001faa4e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001faa4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001faa4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001faa4e0, 0xc000888080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2110
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2487 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc000064700}, 0xc001881750, 0xc0000aaf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc000064700}, 0x90?, 0xc001881750, 0xc001881798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc000064700?}, 0xc0015e8680?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5a1a45?, 0xc0017bd200?, 0xc004a5fb90?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2469
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3316 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a0e700, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3314
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 365 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000496e80, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 323
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 326 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 325
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 324 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000496e50, 0x23)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001382d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000496e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000135b80, {0x3767e60, 0xc004a57470}, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000135b80, 0x3b9aca00, 0x0, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 365
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf
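
Goroutine 324 (and its twins 2423, 2486 and 3217) is client-go's certificate-rotation worker: JitterUntil re-runs runWorker once a second until the stop channel closes, and each run blocks in a workqueue Get until an item arrives. The sketch below reproduces that loop with a plain channel standing in for client-go's workqueue, so only wait.Until from apimachinery is assumed:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	queue := make(chan string, 8) // stand-in for client-go's workqueue
	stopCh := make(chan struct{})

	// runWorker drains the queue; wait.Until restarts it every second
	// until stopCh closes, mirroring dynamicClientCert.Run in the dump.
	runWorker := func() {
		for {
			select {
			case item, ok := <-queue:
				if !ok {
					return
				}
				fmt.Println("processing", item)
			case <-stopCh:
				return
			}
		}
	}
	go wait.Until(runWorker, time.Second, stopCh)

	queue <- "rotate-client-cert" // hypothetical work item
	time.Sleep(100 * time.Millisecond)
	close(stopCh)
}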

                                                
                                                
goroutine 1790 [chan receive, 18 minutes]:
testing.(*T).Run(0xc0008d2d00, {0x2927010?, 0x0?}, 0xc0004bd180)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008d2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0008d2d00, 0xc000496180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1789
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 710 [select, 76 minutes]:
net/http.(*persistConn).writeLoop(0xc0013727e0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 707
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 2488 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2487
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2423 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000496690, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001307d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004966c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007c0010, {0x3767e60, 0xc001660000}, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007c0010, 0x3b9aca00, 0x0, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2457
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2469 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00029ce80, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2482
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2622 [IO wait]:
internal/poll.runtime_pollWait(0x7f9170797960, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0000d1100?, 0xc001802000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0000d1100, {0xc001802000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0000d1100, {0xc001802000?, 0x10?, 0xc004a548a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000a14150, {0xc001802000?, 0xc00180205f?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001e84c18, {0xc001802000?, 0x0?, 0xc001e84c18?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc00181ad38, {0x3768660, 0xc001e84c18})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00181aa88, {0x7f91703beff8, 0xc001a7a4e0}, 0xc004a54a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00181aa88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc00181aa88, {0xc00158f000, 0x1000, 0xc00141fc00?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001b105a0, {0xc00028c660, 0x9, 0x4cb2c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc001b105a0}, {0xc00028c660, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00028c660, 0x9, 0x47b965?}, {0x3766900?, 0xc001b105a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00028c620)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc004a54fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0017bcc00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2621
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 632 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc000918a80, 0xc001937650)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 631
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2200 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015e8b60)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015e8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015e8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015e8b60, 0xc0004bcb80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2110
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2311 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001faa9c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001faa9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001faa9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001faa9c0, 0xc000888100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2110
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2705 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc00049a4d0}, {0x3782400, 0xc00165a120}, 0x1, 0x0, 0xc0013cfc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc00045c690?}, 0x3b9aca00, 0xc001311e10?, 0x1, 0xc001311c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc00045c690}, 0xc001c891e0, {0xc001a7e240, 0x12}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f170, 0xc00045c690}, 0xc001c891e0, {0xc001a7e240, 0x12}, {0x2932ff6?, 0xc000486f60?}, {0x55b653?, 0x4b1aaf?}, {0xc0001b8f00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c891e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c891e0, 0xc000a1b680)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2250
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2486 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00029ce50, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0009c1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00029ce80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009bd310, {0x3767e60, 0xc001f72510}, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009bd310, 0x3b9aca00, 0x0, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2469
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2468 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2482
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1789 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00015dd40, 0x3410c78)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1740
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3331 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3330
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2457 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0004966c0, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2452
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2456 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2452
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2425 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2424
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2603 [IO wait]:
internal/poll.runtime_pollWait(0x7f9170797120, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0004bde80?, 0xc001a86000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0004bde80, {0xc001a86000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0004bde80, {0xc001a86000?, 0x10?, 0xc004a558a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0018d2088, {0xc001a86000?, 0xc001a86005?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc00180c288, {0xc001a86000?, 0x0?, 0xc00180c288?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0018fa9b8, {0x3768660, 0xc00180c288})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0018fa708, {0x7f91703beff8, 0xc0019502d0}, 0xc004a55a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0018fa708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0018fa708, {0xc0007ca000, 0x1000, 0xc001805a40?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001a2d3e0, {0xc001a84200, 0x9, 0x4cb2c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc001a2d3e0}, {0xc001a84200, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001a84200, 0x9, 0x47b965?}, {0x3766900?, 0xc001a2d3e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001a841c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc004a55fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001992000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2602
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2584 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc0005b2460}, {0x3782400, 0xc001575280}, 0x1, 0x0, 0xc001311c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc0000e93b0?}, 0x3b9aca00, 0xc001311e10?, 0x1, 0xc001311c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc0000e93b0}, 0xc001c88680, {0xc000480e20, 0x1c}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f170, 0xc0000e93b0}, 0xc001c88680, {0xc000480e20, 0x1c}, {0x294f08e?, 0xc001c52f60?}, {0x55b653?, 0x4b1aaf?}, {0xc0000ea300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c88680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c88680, 0xc0000d0080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2390
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2110 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc001c88ea0, 0xc00013aa50)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1671
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2634 [IO wait]:
internal/poll.runtime_pollWait(0x7f91703bece0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0004bd480?, 0xc001a87000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0004bd480, {0xc001a87000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0004bd480, {0xc001a87000?, 0x9f65f2?, 0xc0018229a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0018d2000, {0xc001a87000?, 0xc00135c140?, 0xc001a87005?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc00180c210, {0xc001a87000?, 0x0?, 0xc00180c210?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc00181b0b8, {0x3768660, 0xc00180c210})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00181ae08, {0x3767920, 0xc0018d2000}, 0xc001822a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00181ae08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc00181ae08, {0xc00139a000, 0x1000, 0xc00141e700?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001a2d800, {0xc0009003c0, 0x9, 0x4cb2c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc001a2d800}, {0xc0009003c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0009003c0, 0x9, 0x47b965?}, {0x3766900?, 0xc001a2d800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000900380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001822fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00137a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2633
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2297 [chan receive]:
testing.(*T).Run(0xc001c88340, {0x2951f4f?, 0xc000097d70?}, 0xc0017a2000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001c88340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001c88340, 0xc0004bd180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1790
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1740 [chan receive, 24 minutes]:
testing.(*T).Run(0xc0008d3520, {0x2925a4b?, 0x55b653?}, 0x3410c78)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0008d3520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0008d3520, 0x3410a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2199 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015e89c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015e89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015e89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015e89c0, 0xc0004bcb00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2110
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3314 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc0000e9570}, {0x3782400, 0xc00040a2e0}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc00049b340?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc00049b340}, 0xc001c89380, {0xc0007a35d8, 0x16}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f170, 0xc00049b340}, 0xc001c89380, {0xc0007a35d8, 0x16}, {0x293d16c?, 0xc001883760?}, {0x55b653?, 0x4b1aaf?}, {0xc000919b00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c89380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c89380, 0xc0017a2000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2297
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2320 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001faad00, {0x2951f4f?, 0xc001886570?}, 0xc0004bce80)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001faad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001faad00, 0xc000888500)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1793
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2198 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015e8820)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015e8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015e8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015e8820, 0xc0004bc480)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2110
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3217 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001a0e6d0, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001c4d580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a0e700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000824e40, {0x3767e60, 0xc000922e40}, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000824e40, 0x3b9aca00, 0x0, 0x1, 0xc000064700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2201 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015e8d00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015e8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015e8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015e8d00, 0xc0004bcc00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2110
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1791 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc0007be730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0008d3040)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008d3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008d3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0008d3040, 0xc0004961c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1789
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1792 [chan receive, 18 minutes]:
testing.(*T).Run(0xc0008d31e0, {0x2927010?, 0x0?}, 0xc000888680)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008d31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0008d31e0, 0xc000496200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1789
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1793 [chan receive, 18 minutes]:
testing.(*T).Run(0xc0008d3380, {0x2927010?, 0x0?}, 0xc000888500)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008d3380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0008d3380, 0xc000496240)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1789
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2390 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001fab040, {0x2951f4f?, 0xc001390d70?}, 0xc0000d0080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001fab040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001fab040, 0xc000888680)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1792
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1827 [chan receive, 18 minutes]:
testing.(*T).Run(0xc0008d3d40, {0x2927010?, 0x0?}, 0xc000a1ae80)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008d3d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0008d3d40, 0xc000496300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1789
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                    

Test pass (171/213)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.98
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 6.53
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 79.74
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 56.27
29 TestCertExpiration 308.26
31 TestForceSystemdFlag 49.87
32 TestForceSystemdEnv 60.37
34 TestKVMDriverInstallOrUpdate 1.17
38 TestErrorSpam/setup 43.3
39 TestErrorSpam/start 0.33
40 TestErrorSpam/status 0.72
41 TestErrorSpam/pause 1.56
42 TestErrorSpam/unpause 1.68
43 TestErrorSpam/stop 4.04
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 53.02
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 40.65
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.07
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.41
55 TestFunctional/serial/CacheCmd/cache/add_local 1.07
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.04
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.12
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 32.32
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.34
66 TestFunctional/serial/LogsFileCmd 1.31
67 TestFunctional/serial/InvalidService 4.5
69 TestFunctional/parallel/ConfigCmd 0.34
70 TestFunctional/parallel/DashboardCmd 25.7
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.16
73 TestFunctional/parallel/StatusCmd 0.9
77 TestFunctional/parallel/ServiceCmdConnect 7.93
78 TestFunctional/parallel/AddonsCmd 0.12
79 TestFunctional/parallel/PersistentVolumeClaim 41.52
81 TestFunctional/parallel/SSHCmd 0.71
82 TestFunctional/parallel/CpCmd 1.49
83 TestFunctional/parallel/MySQL 25.35
84 TestFunctional/parallel/FileSync 0.22
85 TestFunctional/parallel/CertSync 1.39
89 TestFunctional/parallel/NodeLabels 0.07
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
93 TestFunctional/parallel/License 0.18
94 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
95 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
96 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
97 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
98 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
99 TestFunctional/parallel/MountCmd/any-port 20.77
100 TestFunctional/parallel/ProfileCmd/profile_list 0.34
101 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
102 TestFunctional/parallel/ServiceCmd/List 0.89
103 TestFunctional/parallel/ServiceCmd/JSONOutput 0.87
104 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
105 TestFunctional/parallel/ServiceCmd/Format 0.34
106 TestFunctional/parallel/ServiceCmd/URL 0.48
107 TestFunctional/parallel/Version/short 0.05
108 TestFunctional/parallel/Version/components 0.7
109 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
110 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
111 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
112 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
113 TestFunctional/parallel/ImageCommands/ImageBuild 2.92
114 TestFunctional/parallel/ImageCommands/Setup 0.48
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.95
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.92
119 TestFunctional/parallel/ImageCommands/ImageRemove 0.91
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.81
121 TestFunctional/parallel/MountCmd/specific-port 1.95
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.8
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
133 TestFunctional/delete_echo-server_images 0.03
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 190.5
140 TestMultiControlPlane/serial/DeployApp 5.63
141 TestMultiControlPlane/serial/PingHostFromPods 1.26
142 TestMultiControlPlane/serial/AddWorkerNode 57.47
143 TestMultiControlPlane/serial/NodeLabels 0.06
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
145 TestMultiControlPlane/serial/CopyFile 12.43
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
151 TestMultiControlPlane/serial/DeleteSecondaryNode 16.67
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
154 TestMultiControlPlane/serial/RestartCluster 444.06
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
156 TestMultiControlPlane/serial/AddSecondaryNode 72.87
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
161 TestJSONOutput/start/Command 87.77
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.68
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.59
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.35
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.18
189 TestMainNoArgs 0.04
190 TestMinikubeProfile 89.44
193 TestMountStart/serial/StartWithMountFirst 26.34
194 TestMountStart/serial/VerifyMountFirst 0.36
195 TestMountStart/serial/StartWithMountSecond 26.39
196 TestMountStart/serial/VerifyMountSecond 0.35
197 TestMountStart/serial/DeleteFirst 0.66
198 TestMountStart/serial/VerifyMountPostDelete 0.35
199 TestMountStart/serial/Stop 1.27
200 TestMountStart/serial/RestartStopped 22.25
201 TestMountStart/serial/VerifyMountPostStop 0.35
204 TestMultiNode/serial/FreshStart2Nodes 107.64
205 TestMultiNode/serial/DeployApp2Nodes 4.97
206 TestMultiNode/serial/PingHostFrom2Pods 0.78
207 TestMultiNode/serial/AddNode 53.56
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.21
210 TestMultiNode/serial/CopyFile 7.04
211 TestMultiNode/serial/StopNode 2.29
212 TestMultiNode/serial/StartAfterStop 37.64
214 TestMultiNode/serial/DeleteNode 2.29
216 TestMultiNode/serial/RestartMultiNode 206.08
217 TestMultiNode/serial/ValidateNameConflict 46.05
224 TestScheduledStopUnix 115.69
228 TestRunningBinaryUpgrade 184.02
232 TestStoppedBinaryUpgrade/Setup 0.59
233 TestStoppedBinaryUpgrade/Upgrade 164.92
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
244 TestNoKubernetes/serial/StartWithK8s 74.54
246 TestPause/serial/Start 57.02
247 TestNoKubernetes/serial/StartWithStopK8s 18.08
248 TestNoKubernetes/serial/Start 41.66
249 TestPause/serial/SecondStartNoReconfiguration 63.1
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
251 TestNoKubernetes/serial/ProfileList 1.24
252 TestNoKubernetes/serial/Stop 1.28
253 TestNoKubernetes/serial/StartNoArgs 31.34
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
258 TestPause/serial/Pause 0.86
264 TestPause/serial/VerifyStatus 0.28
265 TestPause/serial/Unpause 0.74
266 TestPause/serial/PauseAgain 0.93
267 TestPause/serial/DeletePaused 1.08
268 TestPause/serial/VerifyDeletedResources 0.53
x
+
TestDownloadOnly/v1.20.0/json-events (12.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-607359 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-607359 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.979250903s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-607359
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-607359: exit status 85 (54.65964ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-607359 | jenkins | v1.34.0 | 16 Sep 24 12:52 UTC |          |
	|         | -p download-only-607359        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 12:52:15
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 12:52:15.717834  720556 out.go:345] Setting OutFile to fd 1 ...
	I0916 12:52:15.717932  720556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:52:15.717940  720556 out.go:358] Setting ErrFile to fd 2...
	I0916 12:52:15.717944  720556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:52:15.718130  720556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	W0916 12:52:15.718247  720556 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19652-713072/.minikube/config/config.json: open /home/jenkins/minikube-integration/19652-713072/.minikube/config/config.json: no such file or directory
	I0916 12:52:15.718775  720556 out.go:352] Setting JSON to true
	I0916 12:52:15.719709  720556 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9285,"bootTime":1726481851,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 12:52:15.719801  720556 start.go:139] virtualization: kvm guest
	I0916 12:52:15.722014  720556 out.go:97] [download-only-607359] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 12:52:15.722119  720556 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 12:52:15.722156  720556 notify.go:220] Checking for updates...
	I0916 12:52:15.723308  720556 out.go:169] MINIKUBE_LOCATION=19652
	I0916 12:52:15.724479  720556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 12:52:15.725645  720556 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 12:52:15.726747  720556 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 12:52:15.727824  720556 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 12:52:15.729643  720556 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 12:52:15.729879  720556 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 12:52:15.763384  720556 out.go:97] Using the kvm2 driver based on user configuration
	I0916 12:52:15.763410  720556 start.go:297] selected driver: kvm2
	I0916 12:52:15.763418  720556 start.go:901] validating driver "kvm2" against <nil>
	I0916 12:52:15.763722  720556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:52:15.763807  720556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 12:52:15.778415  720556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 12:52:15.778460  720556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 12:52:15.778957  720556 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 12:52:15.779131  720556 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 12:52:15.779165  720556 cni.go:84] Creating CNI manager for ""
	I0916 12:52:15.779219  720556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 12:52:15.779228  720556 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 12:52:15.779294  720556 start.go:340] cluster config:
	{Name:download-only-607359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-607359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:52:15.779504  720556 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:52:15.780913  720556 out.go:97] Downloading VM boot image ...
	I0916 12:52:15.780955  720556 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 12:52:23.376894  720556 out.go:97] Starting "download-only-607359" primary control-plane node in "download-only-607359" cluster
	I0916 12:52:23.376916  720556 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 12:52:23.395874  720556 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 12:52:23.395900  720556 cache.go:56] Caching tarball of preloaded images
	I0916 12:52:23.396063  720556 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 12:52:23.397461  720556 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 12:52:23.397485  720556 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 12:52:23.422915  720556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-607359 host does not exist
	  To start a cluster, run: "minikube start -p download-only-607359"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
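
A --download-only start only populates the local cache (ISO plus preload tarball) and never boots a VM, which is why the logs command above exits 85 with "host does not exist". A minimal standalone sketch of the same check, assuming the binary path, a hypothetical profile name, and the MINIKUBE_HOME/cache layout shown in this run, could look like:

	// downloadonly_check.go - a sketch, not part of the suite: re-run a
	// download-only start and confirm the preload tarball landed in the cache.
	// Binary path, profile name and cache layout mirror this run and are
	// assumptions anywhere else.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		home := os.Getenv("MINIKUBE_HOME") // in this run: .../19652-713072/.minikube
		if home == "" {
			log.Fatal("set MINIKUBE_HOME first")
		}

		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "download-only-demo", "--download-only", "--force",
			"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("download-only start failed: %v", err)
		}

		// Same cache layout as the Downloading line in the log above.
		tarball := filepath.Join(home, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
		if _, err := os.Stat(tarball); err != nil {
			log.Fatalf("preload not cached: %v", err)
		}
		fmt.Println("preload cached at", tarball)
	}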

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-607359
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (6.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-569502 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-569502 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.527916138s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.53s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-569502
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-569502: exit status 85 (56.180458ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-607359 | jenkins | v1.34.0 | 16 Sep 24 12:52 UTC |                     |
	|         | -p download-only-607359        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 12:52 UTC | 16 Sep 24 12:52 UTC |
	| delete  | -p download-only-607359        | download-only-607359 | jenkins | v1.34.0 | 16 Sep 24 12:52 UTC | 16 Sep 24 12:52 UTC |
	| start   | -o=json --download-only        | download-only-569502 | jenkins | v1.34.0 | 16 Sep 24 12:52 UTC |                     |
	|         | -p download-only-569502        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 12:52:29
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 12:52:29.001336  720762 out.go:345] Setting OutFile to fd 1 ...
	I0916 12:52:29.001428  720762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:52:29.001436  720762 out.go:358] Setting ErrFile to fd 2...
	I0916 12:52:29.001440  720762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:52:29.001630  720762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 12:52:29.002180  720762 out.go:352] Setting JSON to true
	I0916 12:52:29.003112  720762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9298,"bootTime":1726481851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 12:52:29.003216  720762 start.go:139] virtualization: kvm guest
	I0916 12:52:29.005093  720762 out.go:97] [download-only-569502] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 12:52:29.005265  720762 notify.go:220] Checking for updates...
	I0916 12:52:29.006343  720762 out.go:169] MINIKUBE_LOCATION=19652
	I0916 12:52:29.007466  720762 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 12:52:29.008515  720762 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 12:52:29.009449  720762 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 12:52:29.010655  720762 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 12:52:29.012563  720762 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 12:52:29.012798  720762 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 12:52:29.044459  720762 out.go:97] Using the kvm2 driver based on user configuration
	I0916 12:52:29.044488  720762 start.go:297] selected driver: kvm2
	I0916 12:52:29.044495  720762 start.go:901] validating driver "kvm2" against <nil>
	I0916 12:52:29.044813  720762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:52:29.044917  720762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19652-713072/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 12:52:29.059890  720762 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 12:52:29.059953  720762 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 12:52:29.060491  720762 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 12:52:29.060652  720762 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 12:52:29.060682  720762 cni.go:84] Creating CNI manager for ""
	I0916 12:52:29.060725  720762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 12:52:29.060733  720762 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 12:52:29.060788  720762 start.go:340] cluster config:
	{Name:download-only-569502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-569502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:52:29.060878  720762 iso.go:125] acquiring lock: {Name:mk66d96ffbd424a8ca76a8604dfbe200d58305de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:52:29.062318  720762 out.go:97] Starting "download-only-569502" primary control-plane node in "download-only-569502" cluster
	I0916 12:52:29.062338  720762 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:52:29.084341  720762 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 12:52:29.084375  720762 cache.go:56] Caching tarball of preloaded images
	I0916 12:52:29.084561  720762 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:52:29.086122  720762 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 12:52:29.086141  720762 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0916 12:52:29.111115  720762 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19652-713072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-569502 host does not exist
	  To start a cluster, run: "minikube start -p download-only-569502"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
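
The Downloading line above fetches the preload with an md5 checksum appended as a query parameter, which minikube's download package verifies after the transfer. A rough standalone equivalent of that checksum-verified download (URL and md5 copied from this run, output written to a local file) might be:

	// md5_fetch.go - a sketch of a checksum-verified preload download; the URL
	// and expected md5 are taken from the log above and treated as examples.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
		wantMD5 := "aa79045e4550b9510ee496fee0d50abb"

		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		out, err := os.Create("preload.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()

		// Write to disk and hash in one pass.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			log.Fatalf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		fmt.Println("preload downloaded and verified")
	}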

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-569502
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-723259 --alsologtostderr --binary-mirror http://127.0.0.1:39289 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-723259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-723259
--- PASS: TestBinaryMirror (0.59s)
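
TestBinaryMirror points minikube at a throwaway HTTP server on 127.0.0.1:39289 and checks that a download-only start succeeds against it. A bare-bones stand-in for such a mirror, assuming a local mirror-root directory laid out however minikube expects (the test spins up its own ephemeral server), is just a static file server:

	// mirror.go - a minimal local file server of the kind --binary-mirror
	// points at; the directory name and its layout are assumptions here.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror-root at the address the start command was given.
		http.Handle("/", http.FileServer(http.Dir("mirror-root")))
		log.Println("serving binary mirror on 127.0.0.1:39289")
		log.Fatal(http.ListenAndServe("127.0.0.1:39289", nil))
	}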

                                                
                                    
x
+
TestOffline (79.74s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-613872 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-613872 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.66588796s)
helpers_test.go:175: Cleaning up "offline-crio-613872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-613872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-613872: (1.073696047s)
--- PASS: TestOffline (79.74s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-682228
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-682228: exit status 85 (48.746912ms)

                                                
                                                
-- stdout --
	* Profile "addons-682228" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-682228"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-682228
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-682228: exit status 85 (49.69069ms)

                                                
                                                
-- stdout --
	* Profile "addons-682228" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-682228"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
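
Both PreSetup cases assert the same thing: addon commands against a profile that does not exist must fail with exit status 85 rather than silently succeed. A minimal sketch of that assertion, using a hypothetical profile name, could be:

	// exitcode_check.go - a sketch of the PreSetup assertion: enabling an addon
	// on a missing profile should exit with status 85, as in the runs above.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "no-such-profile")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if !errors.As(err, &exitErr) {
			log.Fatalf("expected a non-zero exit, got err=%v", err)
		}
		if code := exitErr.ExitCode(); code != 85 {
			log.Fatalf("expected exit status 85, got %d", code)
		}
		fmt.Println("got the expected exit status 85")
	}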

                                                
                                    
x
+
TestCertOptions (56.27s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-034867 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0916 14:30:33.279749  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-034867 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (54.8727823s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-034867 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-034867 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-034867 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-034867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-034867
--- PASS: TestCertOptions (56.27s)
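
The ssh step above dumps /var/lib/minikube/certs/apiserver.crt with openssl to confirm that the extra --apiserver-ips and --apiserver-names made it into the certificate's SANs. A rough local equivalent, assuming a PEM copy of apiserver.crt has already been pulled out of the VM, is:

	// san_check.go - decode a PEM copy of the apiserver certificate and check
	// the SANs requested by the start flags above; the local file path is an
	// assumption for illustration.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"net"
		"os"
		"slices"
	)

	func main() {
		pemBytes, err := os.ReadFile("apiserver.crt") // e.g. copied out of the VM
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		wantIP := net.ParseIP("192.168.15.15")
		hasIP := slices.ContainsFunc(cert.IPAddresses, func(ip net.IP) bool { return ip.Equal(wantIP) })
		hasDNS := slices.Contains(cert.DNSNames, "www.google.com")
		fmt.Printf("has 192.168.15.15: %v, has www.google.com: %v\n", hasIP, hasDNS)
	}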

                                                
                                    
x
+
TestCertExpiration (308.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-500026 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-500026 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.849425336s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-500026 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-500026 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m23.402519098s)
helpers_test.go:175: Cleaning up "cert-expiration-500026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-500026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-500026: (1.002992659s)
--- PASS: TestCertExpiration (308.26s)
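
The two --cert-expiration values use Go duration syntax: 3m forces the certificates to expire almost immediately so the second start has to regenerate them, while 8760h is one year. A two-liner makes the values concrete:

	// expiry_durations.go - just to make the two --cert-expiration values above
	// concrete; nothing minikube-specific happens here.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		short, _ := time.ParseDuration("3m")
		long, _ := time.ParseDuration("8760h")
		fmt.Println("3m    =", short)                       // 3m0s
		fmt.Printf("8760h = %.0f days\n", long.Hours()/24) // 365 days
	}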

                                                
                                    
x
+
TestForceSystemdFlag (49.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-243383 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-243383 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.694178492s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-243383 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-243383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-243383
--- PASS: TestForceSystemdFlag (49.87s)
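
The ssh step reads CRI-O's drop-in config to confirm that --force-systemd switched the cgroup manager. A rough equivalent check, assuming CRI-O's usual cgroup_manager key and a hypothetical profile name, could be:

	// systemd_cgroup_check.go - after a --force-systemd start, CRI-O's drop-in
	// should pin the systemd cgroup manager; the key name checked here is an
	// assumption based on CRI-O's standard crio.conf layout.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-demo",
			"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
		if err != nil {
			log.Fatalf("ssh failed: %v\n%s", err, out)
		}
		if !strings.Contains(string(out), `cgroup_manager = "systemd"`) {
			log.Fatal("systemd cgroup manager not configured")
		}
		fmt.Println("CRI-O is using the systemd cgroup manager")
	}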

                                                
                                    
x
+
TestForceSystemdEnv (60.37s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-063895 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-063895 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.622268176s)
helpers_test.go:175: Cleaning up "force-systemd-env-063895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-063895
--- PASS: TestForceSystemdEnv (60.37s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.17s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.17s)

                                                
                                    
x
+
TestErrorSpam/setup (43.3s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-944249 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-944249 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-944249 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-944249 --driver=kvm2  --container-runtime=crio: (43.299057994s)
--- PASS: TestErrorSpam/setup (43.30s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
x
+
TestErrorSpam/stop (4.04s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 stop: (1.644170146s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 stop: (1.049529495s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-944249 --log_dir /tmp/nospam-944249 stop: (1.344713469s)
--- PASS: TestErrorSpam/stop (4.04s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19652-713072/.minikube/files/etc/test/nested/copy/720544/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (53.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-983900 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-983900 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (53.015509235s)
--- PASS: TestFunctional/serial/StartWithProxy (53.02s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.65s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-983900 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-983900 --alsologtostderr -v=8: (40.653014448s)
functional_test.go:663: soft start took 40.653968531s for "functional-983900" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.65s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-983900 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 cache add registry.k8s.io/pause:3.1: (1.09666828s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 cache add registry.k8s.io/pause:3.3: (1.179380132s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 cache add registry.k8s.io/pause:latest: (1.134236733s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-983900 /tmp/TestFunctionalserialCacheCmdcacheadd_local2239974106/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cache add minikube-local-cache-test:functional-983900
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cache delete minikube-local-cache-test:functional-983900
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-983900
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.131982ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
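
The cache_reload sequence above removes a cached image inside the node, confirms crictl no longer sees it, runs cache reload, and confirms the image is back. A condensed sketch of the same loop, with a hypothetical profile name, might look like:

	// cache_reload_demo.go - replay the rmi / inspecti / cache reload / inspecti
	// sequence from the log above against an existing profile (name assumed).
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(args ...string) error {
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-demo"
		_ = run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			log.Fatal("image should be gone after rmi")
		}
		if err := run("-p", p, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			log.Fatal("image should be back after cache reload")
		}
		fmt.Println("cache reload restored the image")
	}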

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 kubectl -- --context functional-983900 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-983900 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.32s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-983900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-983900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.316877668s)
functional_test.go:761: restart took 32.31701924s for "functional-983900" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.32s)
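
The --extra-config value follows minikube's component.key=value convention (here: the apiserver component, the enable-admission-plugins key). A tiny parser makes that shape explicit:

	// extra_config_format.go - split the --extra-config value from the run
	// above into its component, key and value parts.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		raw := "apiserver.enable-admission-plugins=NamespaceAutoProvision"
		compKey, value, _ := strings.Cut(raw, "=")
		component, key, _ := strings.Cut(compKey, ".")
		fmt.Printf("component=%s key=%s value=%s\n", component, key, value)
	}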

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-983900 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
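
ComponentHealth lists the control-plane pods as JSON and checks each one's phase and Ready condition. A simplified version of that check, shelling out to kubectl exactly as the test does (context name taken from this run), could be:

	// component_health.go - list control-plane pods via kubectl -o=json and
	// report phase and readiness, mirroring the output lines above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-983900",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "False"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}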

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 logs: (1.33587304s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 logs --file /tmp/TestFunctionalserialLogsFileCmd3415503443/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 logs --file /tmp/TestFunctionalserialLogsFileCmd3415503443/001/logs.txt: (1.312074278s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.5s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-983900 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-983900
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-983900: exit status 115 (263.556607ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.221:30994 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-983900 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-983900 delete -f testdata/invalidsvc.yaml: (1.037196309s)
--- PASS: TestFunctional/serial/InvalidService (4.50s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 config get cpus: exit status 14 (61.27845ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 config get cpus: exit status 14 (52.222278ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (25.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-983900 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-983900 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 733199: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.70s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-983900 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-983900 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.603583ms)

                                                
                                                
-- stdout --
	* [functional-983900] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 13:35:52.718926  732967 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:35:52.719171  732967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:35:52.719181  732967 out.go:358] Setting ErrFile to fd 2...
	I0916 13:35:52.719186  732967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:35:52.719403  732967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:35:52.720111  732967 out.go:352] Setting JSON to false
	I0916 13:35:52.721287  732967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11902,"bootTime":1726481851,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 13:35:52.721389  732967 start.go:139] virtualization: kvm guest
	I0916 13:35:52.723092  732967 out.go:177] * [functional-983900] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 13:35:52.724318  732967 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 13:35:52.724318  732967 notify.go:220] Checking for updates...
	I0916 13:35:52.725551  732967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 13:35:52.726684  732967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:35:52.727689  732967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:35:52.728748  732967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 13:35:52.729779  732967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 13:35:52.731258  732967 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:35:52.731849  732967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:35:52.731897  732967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:35:52.748178  732967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0916 13:35:52.748690  732967 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:35:52.749264  732967 main.go:141] libmachine: Using API Version  1
	I0916 13:35:52.749286  732967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:35:52.749733  732967 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:35:52.749917  732967 main.go:141] libmachine: (functional-983900) Calling .DriverName
	I0916 13:35:52.750153  732967 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 13:35:52.750480  732967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:35:52.750528  732967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:35:52.766318  732967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0916 13:35:52.766878  732967 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:35:52.767437  732967 main.go:141] libmachine: Using API Version  1
	I0916 13:35:52.767468  732967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:35:52.767800  732967 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:35:52.768026  732967 main.go:141] libmachine: (functional-983900) Calling .DriverName
	I0916 13:35:52.807011  732967 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 13:35:52.808043  732967 start.go:297] selected driver: kvm2
	I0916 13:35:52.808061  732967 start.go:901] validating driver "kvm2" against &{Name:functional-983900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-983900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:35:52.808204  732967 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 13:35:52.810223  732967 out.go:201] 
	W0916 13:35:52.811410  732967 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 13:35:52.812525  732967 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-983900 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-983900 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-983900 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.960496ms)

-- stdout --
	* [functional-983900] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0916 13:35:51.529029  732685 out.go:345] Setting OutFile to fd 1 ...
	I0916 13:35:51.529207  732685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:35:51.529222  732685 out.go:358] Setting ErrFile to fd 2...
	I0916 13:35:51.529230  732685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 13:35:51.529691  732685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 13:35:51.530472  732685 out.go:352] Setting JSON to false
	I0916 13:35:51.531973  732685 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11900,"bootTime":1726481851,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 13:35:51.532110  732685 start.go:139] virtualization: kvm guest
	I0916 13:35:51.534282  732685 out.go:177] * [functional-983900] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 13:35:51.535750  732685 notify.go:220] Checking for updates...
	I0916 13:35:51.535790  732685 out.go:177]   - MINIKUBE_LOCATION=19652
	I0916 13:35:51.537006  732685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 13:35:51.538068  732685 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	I0916 13:35:51.539231  732685 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	I0916 13:35:51.540268  732685 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 13:35:51.541464  732685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 13:35:51.543103  732685 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 13:35:51.543726  732685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:35:51.543812  732685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:35:51.561348  732685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I0916 13:35:51.561907  732685 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:35:51.562561  732685 main.go:141] libmachine: Using API Version  1
	I0916 13:35:51.562592  732685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:35:51.562940  732685 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:35:51.563179  732685 main.go:141] libmachine: (functional-983900) Calling .DriverName
	I0916 13:35:51.563451  732685 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 13:35:51.563842  732685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 13:35:51.563884  732685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 13:35:51.582136  732685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0916 13:35:51.582669  732685 main.go:141] libmachine: () Calling .GetVersion
	I0916 13:35:51.583158  732685 main.go:141] libmachine: Using API Version  1
	I0916 13:35:51.583178  732685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 13:35:51.583551  732685 main.go:141] libmachine: () Calling .GetMachineName
	I0916 13:35:51.583736  732685 main.go:141] libmachine: (functional-983900) Calling .DriverName
	I0916 13:35:51.623744  732685 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0916 13:35:51.625080  732685 start.go:297] selected driver: kvm2
	I0916 13:35:51.625094  732685 start.go:901] validating driver "kvm2" against &{Name:functional-983900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-983900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 13:35:51.625243  732685 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 13:35:51.627108  732685 out.go:201] 
	W0916 13:35:51.628220  732685 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 13:35:51.629215  732685 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
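Both dry-run cases above exercise the same client-side check: a 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work starts, and the locale only changes the message text (exit status 23 either way). A rough Go sketch of reproducing that check outside the harness, using the binary and profile shown above; the LC_ALL setting is an assumption about how the French output was selected, not taken from the report:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Re-run the dry-run with an undersized --memory and confirm it fails fast.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-983900",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumed way to get the localized message
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	if ee, ok := err.(*exec.ExitError); ok {
		// The report shows exit status 23 for RSRC_INSUFFICIENT_REQ_MEMORY.
		fmt.Println("exit code:", ee.ExitCode())
	}
}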

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
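The status checks above use both a Go-template format string and -o json. A small sketch that reads the JSON form; the struct fields mirror the {{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}} template keys from the command in the log, and the JSON layout is assumed to match:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names follow the template keys used by the test; layout assumed.
type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-983900",
		"status", "-o", "json").Output()
	if err != nil {
		fmt.Println("status returned non-zero (expected when a component is stopped):", err)
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}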

TestFunctional/parallel/ServiceCmdConnect (7.93s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-983900 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-983900 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-swfzb" [dd49fe4d-b2c0-41c1-ae46-6c9dc785577a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-swfzb" [dd49fe4d-b2c0-41c1-ae46-6c9dc785577a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003123984s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.221:30295
functional_test.go:1675: http://192.168.39.221:30295: success! body:

Hostname: hello-node-connect-67bdd5bbb4-swfzb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.221:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.221:30295
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.93s)
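The steps above are a standard NodePort round trip: create a deployment, expose it, ask minikube for the node URL, then GET it and look for the echoed pod hostname. A condensed Go sketch of the same flow (image, names and profile taken from the log; not the test's own code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	run("kubectl", "--context", "functional-983900", "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", "functional-983900", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")
	run("kubectl", "--context", "functional-983900", "wait", "--for=condition=ready",
		"pod", "-l", "app=hello-node-connect", "--timeout=600s")

	url := run("out/minikube-linux-amd64", "-p", "functional-983900",
		"service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echoserver reports the serving pod's name in its response body.
	fmt.Println("got hostname line:", strings.Contains(string(body), "Hostname:"))
}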

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (41.52s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [022d9a5d-4961-469e-a881-7a184dc405bd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004908012s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-983900 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-983900 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-983900 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-983900 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-983900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [abd148f7-22cd-47aa-94b9-40925f7b97ac] Pending
helpers_test.go:344: "sp-pod" [abd148f7-22cd-47aa-94b9-40925f7b97ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [abd148f7-22cd-47aa-94b9-40925f7b97ac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004058015s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-983900 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-983900 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-983900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0d6f260c-0cc2-4703-9627-782e93b3e09f] Pending
helpers_test.go:344: "sp-pod" [0d6f260c-0cc2-4703-9627-782e93b3e09f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0d6f260c-0cc2-4703-9627-782e93b3e09f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003804691s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-983900 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.52s)
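What the PVC test is really asserting is that data written through the claim survives a pod delete/recreate. A rough equivalent driving the same testdata manifests (paths and pod name taken from the log; a sketch, not the harness code):

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	args = append([]string{"--context", "functional-983900"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=ready", "pod", "sp-pod", "--timeout=180s")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim

	// Recreate the pod; the bound volume keeps the file.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=ready", "pod", "sp-pod", "--timeout=180s")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo" to still be listed
	fmt.Println("file survived pod recreation")
}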

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (1.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh -n functional-983900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cp functional-983900:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3981986653/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh -n functional-983900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh -n functional-983900 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.49s)
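The cp checks reduce to: copy a file in with minikube cp, read it back over minikube ssh, compare. A minimal sketch assuming the same paths as above:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	mk := "out/minikube-linux-amd64"
	if out, err := exec.Command(mk, "-p", "functional-983900", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp: %v\n%s", err, out))
	}
	// Read the file back from inside the guest and compare with the local copy.
	got, err := exec.Command(mk, "-p", "functional-983900", "ssh",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip ok:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}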

TestFunctional/parallel/MySQL (25.35s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-983900 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-t6s8l" [f2e147fc-b761-4b04-8f0e-d597cce78f5d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-t6s8l" [f2e147fc-b761-4b04-8f0e-d597cce78f5d] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004827113s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-983900 exec mysql-6cdb49bbb-t6s8l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-983900 exec mysql-6cdb49bbb-t6s8l -- mysql -ppassword -e "show databases;": exit status 1 (188.219826ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-983900 exec mysql-6cdb49bbb-t6s8l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-983900 exec mysql-6cdb49bbb-t6s8l -- mysql -ppassword -e "show databases;": exit status 1 (204.384071ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-983900 exec mysql-6cdb49bbb-t6s8l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.35s)
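The two ERROR 2002 failures above are expected: the pod reports Running before mysqld has opened its socket, so the harness simply retries the query until it succeeds. A sketch of the same retry loop (the pod name below is specific to this run and would be looked up per run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-6cdb49bbb-t6s8l" // from this run's log; discover it via a label selector in practice
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-983900", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// ERROR 2002 just means mysqld has not opened its socket yet; back off and retry.
		time.Sleep(5 * time.Second)
	}
	fmt.Println("mysql never became ready")
}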

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/720544/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo cat /etc/test/nested/copy/720544/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/720544.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo cat /etc/ssl/certs/720544.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/720544.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo cat /usr/share/ca-certificates/720544.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7205442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo cat /etc/ssl/certs/7205442.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7205442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo cat /usr/share/ca-certificates/7205442.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.39s)
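CertSync verifies that certificates placed in the profile's certs directory earlier in the run are visible inside the guest both under their own name and under a hash-named path in /etc/ssl/certs. A sketch that spot-checks the same locations over minikube ssh (paths copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Paths taken from the test log; the numbered .pem name comes from the earlier setup step.
	paths := []string{
		"/etc/ssl/certs/720544.pem",
		"/usr/share/ca-certificates/720544.pem",
		"/etc/ssl/certs/51391683.0", // hash-named location the test also checks
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-983900",
			"ssh", "sudo cat "+p).Output()
		if err != nil {
			panic(fmt.Sprintf("%s: %v", p, err))
		}
		fmt.Println(p, "present,", strings.Count(string(out), "BEGIN CERTIFICATE"), "certificate block(s)")
	}
}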

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-983900 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
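The label check is a single go-template over the first node's metadata. The same query, wrapped in a small program so the template stays readable:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print every label key on the first node, as the test's template does.
	tmpl := `'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'`
	out, err := exec.Command("kubectl", "--context", "functional-983900", "get", "nodes",
		"--output=go-template", "--template="+tmpl).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	fmt.Printf("%s\n", out)
}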

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh "sudo systemctl is-active docker": exit status 1 (270.900455ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh "sudo systemctl is-active containerd": exit status 1 (254.582632ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
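Because this profile runs CRI-O, the test expects systemctl is-active for docker and containerd to report inactive with a non-zero exit (surfacing above as ssh "Process exited with status 3"). A sketch of the same probe, with the active runtime included for contrast:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-983900",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// Only the configured runtime (crio here) should report "active";
		// the others print "inactive" and the command returns a non-zero exit.
		fmt.Printf("%s: %s (err: %v)\n", unit, state, err)
	}
}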

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-983900 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-983900 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bmskl" [1fbc017e-52cf-4fe2-80f7-8e689d6ed684] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bmskl" [1fbc017e-52cf-4fe2-80f7-8e689d6ed684] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003570495s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/MountCmd/any-port (20.77s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdany-port1194623126/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726493751634568110" to /tmp/TestFunctionalparallelMountCmdany-port1194623126/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726493751634568110" to /tmp/TestFunctionalparallelMountCmdany-port1194623126/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726493751634568110" to /tmp/TestFunctionalparallelMountCmdany-port1194623126/001/test-1726493751634568110
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.806504ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 13:35 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 13:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 13:35 test-1726493751634568110
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh cat /mount-9p/test-1726493751634568110
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-983900 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [28a4e993-4b27-465e-80d9-f55bc43b55d4] Pending
helpers_test.go:344: "busybox-mount" [28a4e993-4b27-465e-80d9-f55bc43b55d4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [28a4e993-4b27-465e-80d9-f55bc43b55d4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [28a4e993-4b27-465e-80d9-f55bc43b55d4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.007973478s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-983900 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdany-port1194623126/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.77s)
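The mount test keeps a background minikube mount process alive, polls until the 9p filesystem is visible in the guest (the first findmnt above simply raced the mount and was retried), then exercises it from a pod. A trimmed-down sketch of the host-side part only:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	dir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		panic(err)
	}
	mk := "out/minikube-linux-amd64"

	// Keep the 9p server running in the background for the lifetime of the check.
	mount := exec.Command(mk, "mount", "-p", "functional-983900", dir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until the guest actually sees the 9p filesystem.
	for i := 0; i < 30; i++ {
		if err := exec.Command(mk, "-p", "functional-983900", "ssh",
			"findmnt -T /mount-9p | grep 9p").Run(); err == nil {
			fmt.Println("/mount-9p is mounted in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}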

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "291.061136ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.37467ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "320.418802ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.522173ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/ServiceCmd/List (0.89s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.89s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 service list -o json
functional_test.go:1494: Took "869.665653ms" to run "out/minikube-linux-amd64 -p functional-983900 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.221:31927
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.221:31927
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-983900 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-983900
localhost/kicbase/echo-server:functional-983900
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-983900 image ls --format short --alsologtostderr:
I0916 13:36:17.568699  734559 out.go:345] Setting OutFile to fd 1 ...
I0916 13:36:17.568800  734559 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:17.568805  734559 out.go:358] Setting ErrFile to fd 2...
I0916 13:36:17.568810  734559 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:17.569004  734559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
I0916 13:36:17.569607  734559 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:17.569733  734559 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:17.570109  734559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:17.570148  734559 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:17.587127  734559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
I0916 13:36:17.587659  734559 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:17.588243  734559 main.go:141] libmachine: Using API Version  1
I0916 13:36:17.588269  734559 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:17.588643  734559 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:17.588883  734559 main.go:141] libmachine: (functional-983900) Calling .GetState
I0916 13:36:17.590908  734559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:17.590952  734559 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:17.606513  734559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
I0916 13:36:17.607024  734559 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:17.607550  734559 main.go:141] libmachine: Using API Version  1
I0916 13:36:17.607576  734559 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:17.607898  734559 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:17.608071  734559 main.go:141] libmachine: (functional-983900) Calling .DriverName
I0916 13:36:17.608277  734559 ssh_runner.go:195] Run: systemctl --version
I0916 13:36:17.608317  734559 main.go:141] libmachine: (functional-983900) Calling .GetSSHHostname
I0916 13:36:17.610835  734559 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:17.611233  734559 main.go:141] libmachine: (functional-983900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:24:4c", ip: ""} in network mk-functional-983900: {Iface:virbr1 ExpiryTime:2024-09-16 14:33:45 +0000 UTC Type:0 Mac:52:54:00:f5:24:4c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-983900 Clientid:01:52:54:00:f5:24:4c}
I0916 13:36:17.611268  734559 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined IP address 192.168.39.221 and MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:17.611436  734559 main.go:141] libmachine: (functional-983900) Calling .GetSSHPort
I0916 13:36:17.611614  734559 main.go:141] libmachine: (functional-983900) Calling .GetSSHKeyPath
I0916 13:36:17.611772  734559 main.go:141] libmachine: (functional-983900) Calling .GetSSHUsername
I0916 13:36:17.611893  734559 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/functional-983900/id_rsa Username:docker}
I0916 13:36:17.688181  734559 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 13:36:17.739449  734559 main.go:141] libmachine: Making call to close driver server
I0916 13:36:17.739462  734559 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:17.739782  734559 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:17.739803  734559 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 13:36:17.739822  734559 main.go:141] libmachine: Making call to close driver server
I0916 13:36:17.739822  734559 main.go:141] libmachine: (functional-983900) DBG | Closing plugin on server side
I0916 13:36:17.739830  734559 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:17.740040  734559 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:17.740055  734559 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 13:36:17.740067  734559 main.go:141] libmachine: (functional-983900) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
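As the stderr above shows, minikube image ls is backed by sudo crictl images --output json run over SSH inside the guest; the short, table and json formats are different renderings of that list. A sketch that reads the underlying JSON directly (the field names are assumed to follow crictl's output):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of `crictl images --output json`; only the fields used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-983900",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s\n", tag, img.Size)
		}
	}
}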

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-983900 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-983900  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-983900  | 16493f2a069a3 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-983900 image ls --format table --alsologtostderr:
I0916 13:36:19.823751  734704 out.go:345] Setting OutFile to fd 1 ...
I0916 13:36:19.824054  734704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:19.824068  734704 out.go:358] Setting ErrFile to fd 2...
I0916 13:36:19.824075  734704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:19.824354  734704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
I0916 13:36:19.825258  734704 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:19.825444  734704 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:19.826001  734704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:19.826053  734704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:19.841265  734704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
I0916 13:36:19.841865  734704 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:19.842428  734704 main.go:141] libmachine: Using API Version  1
I0916 13:36:19.842448  734704 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:19.842805  734704 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:19.843012  734704 main.go:141] libmachine: (functional-983900) Calling .GetState
I0916 13:36:19.844926  734704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:19.844965  734704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:19.859851  734704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
I0916 13:36:19.860250  734704 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:19.860778  734704 main.go:141] libmachine: Using API Version  1
I0916 13:36:19.860809  734704 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:19.861165  734704 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:19.861374  734704 main.go:141] libmachine: (functional-983900) Calling .DriverName
I0916 13:36:19.861607  734704 ssh_runner.go:195] Run: systemctl --version
I0916 13:36:19.861639  734704 main.go:141] libmachine: (functional-983900) Calling .GetSSHHostname
I0916 13:36:19.864647  734704 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:19.865104  734704 main.go:141] libmachine: (functional-983900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:24:4c", ip: ""} in network mk-functional-983900: {Iface:virbr1 ExpiryTime:2024-09-16 14:33:45 +0000 UTC Type:0 Mac:52:54:00:f5:24:4c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-983900 Clientid:01:52:54:00:f5:24:4c}
I0916 13:36:19.865132  734704 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined IP address 192.168.39.221 and MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:19.865239  734704 main.go:141] libmachine: (functional-983900) Calling .GetSSHPort
I0916 13:36:19.865395  734704 main.go:141] libmachine: (functional-983900) Calling .GetSSHKeyPath
I0916 13:36:19.865589  734704 main.go:141] libmachine: (functional-983900) Calling .GetSSHUsername
I0916 13:36:19.865765  734704 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/functional-983900/id_rsa Username:docker}
I0916 13:36:19.955681  734704 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 13:36:20.025500  734704 main.go:141] libmachine: Making call to close driver server
I0916 13:36:20.025527  734704 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:20.025884  734704 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:20.025905  734704 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 13:36:20.025915  734704 main.go:141] libmachine: Making call to close driver server
I0916 13:36:20.025924  734704 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:20.027307  734704 main.go:141] libmachine: (functional-983900) DBG | Closing plugin on server side
I0916 13:36:20.027384  734704 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:20.027417  734704 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-983900 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"16493f2a069a382a0bb888890910ab43d64044792b5ac8078df32faa5d0cda42","repoDigests":["localhost/minikube-local-cache-test@sha256:adbdbed359baf5d06924731c91b4e40b7d38d4a8004db083969d85945e108d22"],"repoTags":["localhost/minikube-local-cache-test:functional-983900"],"size":"3328"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-983900"],"size":"4943877"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-983900 image ls --format json --alsologtostderr:
I0916 13:36:19.591455  734680 out.go:345] Setting OutFile to fd 1 ...
I0916 13:36:19.591561  734680 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:19.591569  734680 out.go:358] Setting ErrFile to fd 2...
I0916 13:36:19.591573  734680 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:19.591766  734680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
I0916 13:36:19.592376  734680 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:19.592471  734680 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:19.592834  734680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:19.592871  734680 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:19.608298  734680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
I0916 13:36:19.608841  734680 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:19.609419  734680 main.go:141] libmachine: Using API Version  1
I0916 13:36:19.609439  734680 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:19.609854  734680 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:19.610129  734680 main.go:141] libmachine: (functional-983900) Calling .GetState
I0916 13:36:19.611991  734680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:19.612029  734680 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:19.626952  734680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45231
I0916 13:36:19.627475  734680 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:19.628042  734680 main.go:141] libmachine: Using API Version  1
I0916 13:36:19.628093  734680 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:19.628425  734680 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:19.628640  734680 main.go:141] libmachine: (functional-983900) Calling .DriverName
I0916 13:36:19.628877  734680 ssh_runner.go:195] Run: systemctl --version
I0916 13:36:19.628912  734680 main.go:141] libmachine: (functional-983900) Calling .GetSSHHostname
I0916 13:36:19.631632  734680 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:19.631961  734680 main.go:141] libmachine: (functional-983900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:24:4c", ip: ""} in network mk-functional-983900: {Iface:virbr1 ExpiryTime:2024-09-16 14:33:45 +0000 UTC Type:0 Mac:52:54:00:f5:24:4c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-983900 Clientid:01:52:54:00:f5:24:4c}
I0916 13:36:19.632000  734680 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined IP address 192.168.39.221 and MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:19.632180  734680 main.go:141] libmachine: (functional-983900) Calling .GetSSHPort
I0916 13:36:19.632366  734680 main.go:141] libmachine: (functional-983900) Calling .GetSSHKeyPath
I0916 13:36:19.632512  734680 main.go:141] libmachine: (functional-983900) Calling .GetSSHUsername
I0916 13:36:19.632664  734680 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/functional-983900/id_rsa Username:docker}
I0916 13:36:19.716870  734680 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 13:36:19.771662  734680 main.go:141] libmachine: Making call to close driver server
I0916 13:36:19.771674  734680 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:19.772003  734680 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:19.772021  734680 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 13:36:19.772037  734680 main.go:141] libmachine: Making call to close driver server
I0916 13:36:19.772044  734680 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:19.772287  734680 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:19.772305  734680 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-983900 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 16493f2a069a382a0bb888890910ab43d64044792b5ac8078df32faa5d0cda42
repoDigests:
- localhost/minikube-local-cache-test@sha256:adbdbed359baf5d06924731c91b4e40b7d38d4a8004db083969d85945e108d22
repoTags:
- localhost/minikube-local-cache-test:functional-983900
size: "3328"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-983900
size: "4943877"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-983900 image ls --format yaml --alsologtostderr:
I0916 13:36:17.786543  734583 out.go:345] Setting OutFile to fd 1 ...
I0916 13:36:17.786641  734583 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:17.786649  734583 out.go:358] Setting ErrFile to fd 2...
I0916 13:36:17.786653  734583 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:17.786832  734583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
I0916 13:36:17.787401  734583 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:17.787498  734583 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:17.787836  734583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:17.787878  734583 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:17.803237  734583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
I0916 13:36:17.803709  734583 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:17.804272  734583 main.go:141] libmachine: Using API Version  1
I0916 13:36:17.804309  734583 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:17.804651  734583 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:17.804852  734583 main.go:141] libmachine: (functional-983900) Calling .GetState
I0916 13:36:17.806508  734583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:17.806544  734583 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:17.821625  734583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
I0916 13:36:17.822120  734583 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:17.822670  734583 main.go:141] libmachine: Using API Version  1
I0916 13:36:17.822698  734583 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:17.823013  734583 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:17.823189  734583 main.go:141] libmachine: (functional-983900) Calling .DriverName
I0916 13:36:17.823416  734583 ssh_runner.go:195] Run: systemctl --version
I0916 13:36:17.823456  734583 main.go:141] libmachine: (functional-983900) Calling .GetSSHHostname
I0916 13:36:17.826106  734583 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:17.826527  734583 main.go:141] libmachine: (functional-983900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:24:4c", ip: ""} in network mk-functional-983900: {Iface:virbr1 ExpiryTime:2024-09-16 14:33:45 +0000 UTC Type:0 Mac:52:54:00:f5:24:4c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-983900 Clientid:01:52:54:00:f5:24:4c}
I0916 13:36:17.826566  734583 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined IP address 192.168.39.221 and MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:17.826629  734583 main.go:141] libmachine: (functional-983900) Calling .GetSSHPort
I0916 13:36:17.826821  734583 main.go:141] libmachine: (functional-983900) Calling .GetSSHKeyPath
I0916 13:36:17.826965  734583 main.go:141] libmachine: (functional-983900) Calling .GetSSHUsername
I0916 13:36:17.827093  734583 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/functional-983900/id_rsa Username:docker}
I0916 13:36:17.909215  734583 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 13:36:17.951310  734583 main.go:141] libmachine: Making call to close driver server
I0916 13:36:17.951323  734583 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:17.951688  734583 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:17.951711  734583 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 13:36:17.951727  734583 main.go:141] libmachine: (functional-983900) DBG | Closing plugin on server side
I0916 13:36:17.951735  734583 main.go:141] libmachine: Making call to close driver server
I0916 13:36:17.951747  734583 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:17.951970  734583 main.go:141] libmachine: (functional-983900) DBG | Closing plugin on server side
I0916 13:36:17.952011  734583 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:17.952028  734583 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh pgrep buildkitd: exit status 1 (184.923524ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image build -t localhost/my-image:functional-983900 testdata/build --alsologtostderr
2024/09/16 13:36:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 image build -t localhost/my-image:functional-983900 testdata/build --alsologtostderr: (2.520001492s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-983900 image build -t localhost/my-image:functional-983900 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 165f04b6a3d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-983900
--> 6f6fed7f296
Successfully tagged localhost/my-image:functional-983900
6f6fed7f296727c9e9a22e2c2efc145d5a17010d10f5f42cbf12b87236b56fc2
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-983900 image build -t localhost/my-image:functional-983900 testdata/build --alsologtostderr:
I0916 13:36:18.183961  734640 out.go:345] Setting OutFile to fd 1 ...
I0916 13:36:18.184101  734640 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:18.184114  734640 out.go:358] Setting ErrFile to fd 2...
I0916 13:36:18.184121  734640 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 13:36:18.184294  734640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
I0916 13:36:18.184907  734640 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:18.185475  734640 config.go:182] Loaded profile config "functional-983900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 13:36:18.185876  734640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:18.185918  734640 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:18.201059  734640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
I0916 13:36:18.201538  734640 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:18.202135  734640 main.go:141] libmachine: Using API Version  1
I0916 13:36:18.202160  734640 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:18.202492  734640 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:18.202697  734640 main.go:141] libmachine: (functional-983900) Calling .GetState
I0916 13:36:18.204473  734640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 13:36:18.204511  734640 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 13:36:18.219820  734640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
I0916 13:36:18.220307  734640 main.go:141] libmachine: () Calling .GetVersion
I0916 13:36:18.220856  734640 main.go:141] libmachine: Using API Version  1
I0916 13:36:18.220883  734640 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 13:36:18.221200  734640 main.go:141] libmachine: () Calling .GetMachineName
I0916 13:36:18.221373  734640 main.go:141] libmachine: (functional-983900) Calling .DriverName
I0916 13:36:18.221565  734640 ssh_runner.go:195] Run: systemctl --version
I0916 13:36:18.221597  734640 main.go:141] libmachine: (functional-983900) Calling .GetSSHHostname
I0916 13:36:18.224206  734640 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:18.224624  734640 main.go:141] libmachine: (functional-983900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:24:4c", ip: ""} in network mk-functional-983900: {Iface:virbr1 ExpiryTime:2024-09-16 14:33:45 +0000 UTC Type:0 Mac:52:54:00:f5:24:4c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-983900 Clientid:01:52:54:00:f5:24:4c}
I0916 13:36:18.224652  734640 main.go:141] libmachine: (functional-983900) DBG | domain functional-983900 has defined IP address 192.168.39.221 and MAC address 52:54:00:f5:24:4c in network mk-functional-983900
I0916 13:36:18.224824  734640 main.go:141] libmachine: (functional-983900) Calling .GetSSHPort
I0916 13:36:18.224962  734640 main.go:141] libmachine: (functional-983900) Calling .GetSSHKeyPath
I0916 13:36:18.225102  734640 main.go:141] libmachine: (functional-983900) Calling .GetSSHUsername
I0916 13:36:18.225234  734640 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/functional-983900/id_rsa Username:docker}
I0916 13:36:18.305111  734640 build_images.go:161] Building image from path: /tmp/build.2365964032.tar
I0916 13:36:18.305188  734640 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 13:36:18.317441  734640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2365964032.tar
I0916 13:36:18.321820  734640 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2365964032.tar: stat -c "%s %y" /var/lib/minikube/build/build.2365964032.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2365964032.tar': No such file or directory
I0916 13:36:18.321849  734640 ssh_runner.go:362] scp /tmp/build.2365964032.tar --> /var/lib/minikube/build/build.2365964032.tar (3072 bytes)
I0916 13:36:18.348242  734640 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2365964032
I0916 13:36:18.361033  734640 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2365964032 -xf /var/lib/minikube/build/build.2365964032.tar
I0916 13:36:18.373981  734640 crio.go:315] Building image: /var/lib/minikube/build/build.2365964032
I0916 13:36:18.374048  734640 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-983900 /var/lib/minikube/build/build.2365964032 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0916 13:36:20.633303  734640 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-983900 /var/lib/minikube/build/build.2365964032 --cgroup-manager=cgroupfs: (2.259213853s)
I0916 13:36:20.633383  734640 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2365964032
I0916 13:36:20.645781  734640 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2365964032.tar
I0916 13:36:20.656762  734640 build_images.go:217] Built localhost/my-image:functional-983900 from /tmp/build.2365964032.tar
I0916 13:36:20.656803  734640 build_images.go:133] succeeded building to: functional-983900
I0916 13:36:20.656811  734640 build_images.go:134] failed building to: 
I0916 13:36:20.656841  734640 main.go:141] libmachine: Making call to close driver server
I0916 13:36:20.656855  734640 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:20.657149  734640 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:20.657171  734640 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 13:36:20.657181  734640 main.go:141] libmachine: Making call to close driver server
I0916 13:36:20.657183  734640 main.go:141] libmachine: (functional-983900) DBG | Closing plugin on server side
I0916 13:36:20.657188  734640 main.go:141] libmachine: (functional-983900) Calling .Close
I0916 13:36:20.657440  734640 main.go:141] libmachine: Successfully made call to close driver server
I0916 13:36:20.657458  734640 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)

TestFunctional/parallel/ImageCommands/Setup (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-983900
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image load --daemon kicbase/echo-server:functional-983900 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 image load --daemon kicbase/echo-server:functional-983900 --alsologtostderr: (1.724961322s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.95s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image load --daemon kicbase/echo-server:functional-983900 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-983900
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image load --daemon kicbase/echo-server:functional-983900 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image save kicbase/echo-server:functional-983900 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image rm kicbase/echo-server:functional-983900 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-983900 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.46575877s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.81s)

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdspecific-port2165361968/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.823564ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdspecific-port2165361968/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh "sudo umount -f /mount-9p": exit status 1 (238.374358ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-983900 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdspecific-port2165361968/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-983900
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 image save --daemon kicbase/echo-server:functional-983900 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-983900
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1650522696/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1650522696/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1650522696/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T" /mount1: exit status 1 (281.890129ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-983900 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-983900 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1650522696/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1650522696/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-983900 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1650522696/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-983900
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-983900
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-983900
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (190.5s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-190751 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-190751 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m9.836825314s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (190.50s)

TestMultiControlPlane/serial/DeployApp (5.63s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-190751 -- rollout status deployment/busybox: (3.515165725s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-lsqcp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-w6sc6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-wnt5k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-lsqcp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-w6sc6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-wnt5k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-lsqcp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-w6sc6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-wnt5k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.63s)

TestMultiControlPlane/serial/PingHostFromPods (1.26s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-lsqcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-lsqcp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-w6sc6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-w6sc6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-wnt5k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-190751 -- exec busybox-7dff88458-wnt5k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)

TestMultiControlPlane/serial/AddWorkerNode (57.47s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-190751 -v=7 --alsologtostderr
E0916 13:40:50.207022  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:50.214363  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:50.225902  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:50.247375  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:50.288849  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:50.370128  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:50.531666  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:50.853502  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:51.495708  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:52.777366  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:40:55.339308  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:41:00.460626  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-190751 -v=7 --alsologtostderr: (56.652200781s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
E0916 13:41:10.702896  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.47s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-190751 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp testdata/cp-test.txt ha-190751:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751:/home/docker/cp-test.txt ha-190751-m02:/home/docker/cp-test_ha-190751_ha-190751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test_ha-190751_ha-190751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751:/home/docker/cp-test.txt ha-190751-m03:/home/docker/cp-test_ha-190751_ha-190751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test_ha-190751_ha-190751-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751:/home/docker/cp-test.txt ha-190751-m04:/home/docker/cp-test_ha-190751_ha-190751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test_ha-190751_ha-190751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp testdata/cp-test.txt ha-190751-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m02:/home/docker/cp-test.txt ha-190751:/home/docker/cp-test_ha-190751-m02_ha-190751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test_ha-190751-m02_ha-190751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m02:/home/docker/cp-test.txt ha-190751-m03:/home/docker/cp-test_ha-190751-m02_ha-190751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test_ha-190751-m02_ha-190751-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m02:/home/docker/cp-test.txt ha-190751-m04:/home/docker/cp-test_ha-190751-m02_ha-190751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test_ha-190751-m02_ha-190751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp testdata/cp-test.txt ha-190751-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt ha-190751:/home/docker/cp-test_ha-190751-m03_ha-190751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test_ha-190751-m03_ha-190751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt ha-190751-m02:/home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test_ha-190751-m03_ha-190751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m03:/home/docker/cp-test.txt ha-190751-m04:/home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test_ha-190751-m03_ha-190751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp testdata/cp-test.txt ha-190751-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3557247571/001/cp-test_ha-190751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt ha-190751:/home/docker/cp-test_ha-190751-m04_ha-190751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751 "sudo cat /home/docker/cp-test_ha-190751-m04_ha-190751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt ha-190751-m02:/home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m02 "sudo cat /home/docker/cp-test_ha-190751-m04_ha-190751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 cp ha-190751-m04:/home/docker/cp-test.txt ha-190751-m03:/home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 ssh -n ha-190751-m03 "sudo cat /home/docker/cp-test_ha-190751-m04_ha-190751-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.43s)
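
The CopyFile steps above pair each 'minikube cp' with an 'ssh -n ... sudo cat' readback of the same path. Below is a minimal Go sketch of one such round trip, reusing the binary, profile and paths from this run; it illustrates the pattern only and is not the helpers_test.go implementation.

	package main
	
	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		profile, node, remote := "ha-190751", "ha-190751-m02", "/home/docker/cp-test.txt"
	
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
	
		// Copy the file onto the node, then read it back over SSH and compare.
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", "testdata/cp-test.txt", node+":"+remote)
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}
		cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+remote)
		got, err := cat.Output()
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatalf("content mismatch on %s", node)
		}
		fmt.Println("copy verified on", node)
	}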

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.487055217s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-190751 node delete m03 -v=7 --alsologtostderr: (15.928945083s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)
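
The go-template passed to kubectl above prints one Ready condition status per node. The sketch below evaluates that same template with Go's text/template (which is what kubectl's go-template output is rendered with) against a small hand-written node list instead of a live cluster; the sample JSON is made up for illustration.

	package main
	
	import (
		"encoding/json"
		"log"
		"os"
		"text/template"
	)
	
	func main() {
		// A stripped-down stand-in for `kubectl get nodes -o json` output.
		const nodes = `{"items":[
		  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}},
		  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
		]}`
	
		var data any
		if err := json.Unmarshal([]byte(nodes), &data); err != nil {
			log.Fatal(err)
		}
	
		// The same template string the test hands to kubectl.
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	
		// Prints " True" once per Ready node.
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			log.Fatal(err)
		}
	}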

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (444.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-190751 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0916 13:55:50.207494  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 13:57:13.273142  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
E0916 14:00:50.206551  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-190751 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m23.196924155s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (444.06s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-190751 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-190751 --control-plane -v=7 --alsologtostderr: (1m12.031831889s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-190751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (87.77s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-267212 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-267212 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.768521195s)
--- PASS: TestJSONOutput/start/Command (87.77s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
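
DistinctCurrentSteps and IncreasingCurrentSteps validate the JSON event stream captured from the start command above; each step event carries a data.currentstep field (the event shape is visible in the TestErrorJSONOutput stdout further down). The sketch below shows the kind of monotonicity check involved, assuming the events were saved one JSON object per line to a hypothetical events.json file; it is not the json_output_test.go code.

	package main
	
	import (
		"bufio"
		"encoding/json"
		"log"
		"os"
		"strconv"
	)
	
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	
	func main() {
		// Hypothetical capture of `minikube start --output=json`, one JSON object per line.
		f, err := os.Open("events.json")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		prev := -1
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				log.Fatalf("bad event line: %v", err)
			}
			if ev.Type != "io.k8s.sigs.minikube.step" {
				continue
			}
			step, err := strconv.Atoi(ev.Data["currentstep"])
			if err != nil {
				log.Fatalf("non-numeric currentstep: %v", err)
			}
			// Strictly increasing implies distinct as well.
			if step <= prev {
				log.Fatalf("currentstep %d does not increase past %d", step, prev)
			}
			prev = step
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
		log.Println("current steps are distinct and increasing")
	}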

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-267212 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-267212 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-267212 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-267212 --output=json --user=testUser: (7.347243984s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-664910 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-664910 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.534774ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6becd1c9-6cb6-493a-91e3-d4fa06d02c75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664910] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"55e65435-317b-4c42-adce-b1dc8ffe0b24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19652"}}
	{"specversion":"1.0","id":"192c8459-3f94-4af9-864b-8ac9e0e8e6a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e38b26a3-2677-4d1a-85cc-3dfedeebc58c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig"}}
	{"specversion":"1.0","id":"7ed1dcc7-19fc-40f3-92d3-554c17aa7377","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube"}}
	{"specversion":"1.0","id":"49b8c260-1f15-424a-aa40-c21a778afd04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7a10bbb5-1b2c-4df8-aba1-bea04b33c968","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fd64e52e-f361-4bf3-ad19-122f9c548b47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-664910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-664910
--- PASS: TestErrorJSONOutput (0.18s)
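
The stdout above is minikube's CloudEvents-style JSON stream, ending in an io.k8s.sigs.minikube.error event that carries the exit code and message. A minimal sketch of extracting those fields from that exact line (the struct is ad hoc, defined only for this example):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"log"
	)
	
	func main() {
		// The error event emitted for the unsupported "fail" driver (copied from the output above).
		const line = `{"specversion":"1.0","id":"fd64e52e-f361-4bf3-ad19-122f9c548b47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	
		var ev struct {
			Type string `json:"type"`
			Data struct {
				ExitCode string `json:"exitcode"`
				Message  string `json:"message"`
				Name     string `json:"name"`
			} `json:"data"`
		}
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s (exit code %s)\n", ev.Data.Name, ev.Data.Message, ev.Data.ExitCode)
		// Output: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit code 56)
	}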

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (89.44s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-697973 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-697973 --driver=kvm2  --container-runtime=crio: (42.044039056s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-711143 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-711143 --driver=kvm2  --container-runtime=crio: (44.800002136s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-697973
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-711143
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-711143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-711143
helpers_test.go:175: Cleaning up "first-697973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-697973
--- PASS: TestMinikubeProfile (89.44s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-589783 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0916 14:05:50.207051  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-589783 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.341854789s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.34s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-589783 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-589783 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-608111 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-608111 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.389695501s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.39s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-608111 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-608111 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-589783 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-608111 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-608111 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-608111
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-608111: (1.269444627s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.25s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-608111
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-608111: (21.252421873s)
--- PASS: TestMountStart/serial/RestartStopped (22.25s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-608111 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-608111 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-561755 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-561755 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m47.246764175s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.64s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-561755 -- rollout status deployment/busybox: (3.537888098s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-f9c5w -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-tsgxx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-f9c5w -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-tsgxx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-f9c5w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-tsgxx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-f9c5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-f9c5w -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-tsgxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-561755 -- exec busybox-7dff88458-tsgxx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
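
The PingHostFrom2Pods steps read the host gateway IP by running nslookup inside each busybox pod and slicing its output with awk 'NR==5' | cut -d' ' -f3. The Go sketch below mirrors that line/field slicing against a sample capture; the sample output and its field positions are assumptions about the busybox nslookup format, not taken from this run.

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// thirdFieldOfFifthLine mirrors `awk 'NR==5' | cut -d' ' -f3`:
	// take line 5 of the input, split it on single spaces, return field 3.
	func thirdFieldOfFifthLine(out string) (string, bool) {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return "", false
		}
		fields := strings.Split(lines[4], " ") // cut counts fields from 1
		if len(fields) < 3 {
			return "", false
		}
		return fields[2], true
	}
	
	func main() {
		// Hypothetical nslookup output captured from a pod; the real format varies by busybox build.
		sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
		if ip, ok := thirdFieldOfFifthLine(sample); ok {
			fmt.Println("host IP:", ip)
		}
	}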

                                                
                                    
TestMultiNode/serial/AddNode (53.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-561755 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-561755 -v 3 --alsologtostderr: (53.00852057s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.56s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-561755 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp testdata/cp-test.txt multinode-561755:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1710468598/001/cp-test_multinode-561755.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755:/home/docker/cp-test.txt multinode-561755-m02:/home/docker/cp-test_multinode-561755_multinode-561755-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m02 "sudo cat /home/docker/cp-test_multinode-561755_multinode-561755-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755:/home/docker/cp-test.txt multinode-561755-m03:/home/docker/cp-test_multinode-561755_multinode-561755-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m03 "sudo cat /home/docker/cp-test_multinode-561755_multinode-561755-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp testdata/cp-test.txt multinode-561755-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1710468598/001/cp-test_multinode-561755-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt multinode-561755:/home/docker/cp-test_multinode-561755-m02_multinode-561755.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755 "sudo cat /home/docker/cp-test_multinode-561755-m02_multinode-561755.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755-m02:/home/docker/cp-test.txt multinode-561755-m03:/home/docker/cp-test_multinode-561755-m02_multinode-561755-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m03 "sudo cat /home/docker/cp-test_multinode-561755-m02_multinode-561755-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp testdata/cp-test.txt multinode-561755-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1710468598/001/cp-test_multinode-561755-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt multinode-561755:/home/docker/cp-test_multinode-561755-m03_multinode-561755.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755 "sudo cat /home/docker/cp-test_multinode-561755-m03_multinode-561755.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 cp multinode-561755-m03:/home/docker/cp-test.txt multinode-561755-m02:/home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 ssh -n multinode-561755-m02 "sudo cat /home/docker/cp-test_multinode-561755-m03_multinode-561755-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-561755 node stop m03: (1.438349812s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-561755 status: exit status 7 (428.064351ms)

                                                
                                                
-- stdout --
	multinode-561755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-561755-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-561755-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-561755 status --alsologtostderr: exit status 7 (420.887099ms)

                                                
                                                
-- stdout --
	multinode-561755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-561755-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-561755-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 14:10:04.131443  752415 out.go:345] Setting OutFile to fd 1 ...
	I0916 14:10:04.131549  752415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:10:04.131561  752415 out.go:358] Setting ErrFile to fd 2...
	I0916 14:10:04.131565  752415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 14:10:04.131741  752415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19652-713072/.minikube/bin
	I0916 14:10:04.131920  752415 out.go:352] Setting JSON to false
	I0916 14:10:04.131946  752415 mustload.go:65] Loading cluster: multinode-561755
	I0916 14:10:04.132061  752415 notify.go:220] Checking for updates...
	I0916 14:10:04.132389  752415 config.go:182] Loaded profile config "multinode-561755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 14:10:04.132409  752415 status.go:255] checking status of multinode-561755 ...
	I0916 14:10:04.132912  752415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:10:04.132980  752415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:10:04.148796  752415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0916 14:10:04.149280  752415 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:10:04.149948  752415 main.go:141] libmachine: Using API Version  1
	I0916 14:10:04.149968  752415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:10:04.150347  752415 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:10:04.150588  752415 main.go:141] libmachine: (multinode-561755) Calling .GetState
	I0916 14:10:04.152064  752415 status.go:330] multinode-561755 host status = "Running" (err=<nil>)
	I0916 14:10:04.152084  752415 host.go:66] Checking if "multinode-561755" exists ...
	I0916 14:10:04.152519  752415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:10:04.152577  752415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:10:04.167859  752415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0916 14:10:04.168334  752415 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:10:04.168855  752415 main.go:141] libmachine: Using API Version  1
	I0916 14:10:04.168878  752415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:10:04.169186  752415 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:10:04.169367  752415 main.go:141] libmachine: (multinode-561755) Calling .GetIP
	I0916 14:10:04.172066  752415 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:10:04.172421  752415 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:10:04.172454  752415 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:10:04.172549  752415 host.go:66] Checking if "multinode-561755" exists ...
	I0916 14:10:04.172831  752415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:10:04.172865  752415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:10:04.187885  752415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I0916 14:10:04.188367  752415 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:10:04.188844  752415 main.go:141] libmachine: Using API Version  1
	I0916 14:10:04.188864  752415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:10:04.189164  752415 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:10:04.189326  752415 main.go:141] libmachine: (multinode-561755) Calling .DriverName
	I0916 14:10:04.189505  752415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 14:10:04.189526  752415 main.go:141] libmachine: (multinode-561755) Calling .GetSSHHostname
	I0916 14:10:04.192163  752415 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:10:04.192539  752415 main.go:141] libmachine: (multinode-561755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:41", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:07:22 +0000 UTC Type:0 Mac:52:54:00:15:a3:41 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-561755 Clientid:01:52:54:00:15:a3:41}
	I0916 14:10:04.192563  752415 main.go:141] libmachine: (multinode-561755) DBG | domain multinode-561755 has defined IP address 192.168.39.163 and MAC address 52:54:00:15:a3:41 in network mk-multinode-561755
	I0916 14:10:04.192713  752415 main.go:141] libmachine: (multinode-561755) Calling .GetSSHPort
	I0916 14:10:04.192883  752415 main.go:141] libmachine: (multinode-561755) Calling .GetSSHKeyPath
	I0916 14:10:04.193032  752415 main.go:141] libmachine: (multinode-561755) Calling .GetSSHUsername
	I0916 14:10:04.193164  752415 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755/id_rsa Username:docker}
	I0916 14:10:04.277268  752415 ssh_runner.go:195] Run: systemctl --version
	I0916 14:10:04.283596  752415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 14:10:04.298154  752415 kubeconfig.go:125] found "multinode-561755" server: "https://192.168.39.163:8443"
	I0916 14:10:04.298194  752415 api_server.go:166] Checking apiserver status ...
	I0916 14:10:04.298241  752415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 14:10:04.311693  752415 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1100/cgroup
	W0916 14:10:04.320838  752415 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1100/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 14:10:04.320897  752415 ssh_runner.go:195] Run: ls
	I0916 14:10:04.325171  752415 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I0916 14:10:04.329519  752415 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I0916 14:10:04.329541  752415 status.go:422] multinode-561755 apiserver status = Running (err=<nil>)
	I0916 14:10:04.329551  752415 status.go:257] multinode-561755 status: &{Name:multinode-561755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 14:10:04.329567  752415 status.go:255] checking status of multinode-561755-m02 ...
	I0916 14:10:04.329887  752415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:10:04.329945  752415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:10:04.345318  752415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0916 14:10:04.345754  752415 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:10:04.346274  752415 main.go:141] libmachine: Using API Version  1
	I0916 14:10:04.346301  752415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:10:04.346634  752415 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:10:04.346797  752415 main.go:141] libmachine: (multinode-561755-m02) Calling .GetState
	I0916 14:10:04.348218  752415 status.go:330] multinode-561755-m02 host status = "Running" (err=<nil>)
	I0916 14:10:04.348235  752415 host.go:66] Checking if "multinode-561755-m02" exists ...
	I0916 14:10:04.348546  752415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:10:04.348589  752415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:10:04.363784  752415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0916 14:10:04.364239  752415 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:10:04.364711  752415 main.go:141] libmachine: Using API Version  1
	I0916 14:10:04.364731  752415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:10:04.365023  752415 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:10:04.365192  752415 main.go:141] libmachine: (multinode-561755-m02) Calling .GetIP
	I0916 14:10:04.367527  752415 main.go:141] libmachine: (multinode-561755-m02) DBG | domain multinode-561755-m02 has defined MAC address 52:54:00:37:14:13 in network mk-multinode-561755
	I0916 14:10:04.367858  752415 main.go:141] libmachine: (multinode-561755-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:14:13", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:08:21 +0000 UTC Type:0 Mac:52:54:00:37:14:13 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-561755-m02 Clientid:01:52:54:00:37:14:13}
	I0916 14:10:04.367911  752415 main.go:141] libmachine: (multinode-561755-m02) DBG | domain multinode-561755-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:37:14:13 in network mk-multinode-561755
	I0916 14:10:04.368043  752415 host.go:66] Checking if "multinode-561755-m02" exists ...
	I0916 14:10:04.368364  752415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:10:04.368402  752415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:10:04.384054  752415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35963
	I0916 14:10:04.384528  752415 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:10:04.385084  752415 main.go:141] libmachine: Using API Version  1
	I0916 14:10:04.385114  752415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:10:04.385412  752415 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:10:04.385596  752415 main.go:141] libmachine: (multinode-561755-m02) Calling .DriverName
	I0916 14:10:04.385818  752415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 14:10:04.385840  752415 main.go:141] libmachine: (multinode-561755-m02) Calling .GetSSHHostname
	I0916 14:10:04.388610  752415 main.go:141] libmachine: (multinode-561755-m02) DBG | domain multinode-561755-m02 has defined MAC address 52:54:00:37:14:13 in network mk-multinode-561755
	I0916 14:10:04.388984  752415 main.go:141] libmachine: (multinode-561755-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:14:13", ip: ""} in network mk-multinode-561755: {Iface:virbr1 ExpiryTime:2024-09-16 15:08:21 +0000 UTC Type:0 Mac:52:54:00:37:14:13 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-561755-m02 Clientid:01:52:54:00:37:14:13}
	I0916 14:10:04.389019  752415 main.go:141] libmachine: (multinode-561755-m02) DBG | domain multinode-561755-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:37:14:13 in network mk-multinode-561755
	I0916 14:10:04.389142  752415 main.go:141] libmachine: (multinode-561755-m02) Calling .GetSSHPort
	I0916 14:10:04.389296  752415 main.go:141] libmachine: (multinode-561755-m02) Calling .GetSSHKeyPath
	I0916 14:10:04.389444  752415 main.go:141] libmachine: (multinode-561755-m02) Calling .GetSSHUsername
	I0916 14:10:04.389549  752415 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19652-713072/.minikube/machines/multinode-561755-m02/id_rsa Username:docker}
	I0916 14:10:04.473004  752415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 14:10:04.488396  752415 status.go:257] multinode-561755-m02 status: &{Name:multinode-561755-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 14:10:04.488432  752415 status.go:255] checking status of multinode-561755-m03 ...
	I0916 14:10:04.488757  752415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 14:10:04.488797  752415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 14:10:04.504669  752415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45503
	I0916 14:10:04.505189  752415 main.go:141] libmachine: () Calling .GetVersion
	I0916 14:10:04.505694  752415 main.go:141] libmachine: Using API Version  1
	I0916 14:10:04.505718  752415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 14:10:04.506055  752415 main.go:141] libmachine: () Calling .GetMachineName
	I0916 14:10:04.506273  752415 main.go:141] libmachine: (multinode-561755-m03) Calling .GetState
	I0916 14:10:04.507917  752415 status.go:330] multinode-561755-m03 host status = "Stopped" (err=<nil>)
	I0916 14:10:04.507935  752415 status.go:343] host is not running, skipping remaining checks
	I0916 14:10:04.507943  752415 status.go:257] multinode-561755-m03 status: &{Name:multinode-561755-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
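As a side note on the status probes visible in the stderr log above: minikube's status check reads the node's disk usage with a df/awk one-liner and decides whether the kubelet is running from the exit status of systemctl is-active. A rough local equivalent of those two checks is sketched below for a systemd-based Linux host; the simplified "is-active kubelet" form is assumed here rather than the exact invocation recorded in the log.

	# Use% of the filesystem backing /var: `df -h /var` prints a header plus one
	# data row, and awk selects field 5 (the Use% column) of that second row.
	df -h /var | awk 'NR==2{print $5}'

	# Exit status 0 means the kubelet unit is active; --quiet suppresses output.
	if sudo systemctl is-active --quiet kubelet; then
	  echo "kubelet: running"
	else
	  echo "kubelet: stopped"
	fi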

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (37.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-561755 node start m03 -v=7 --alsologtostderr: (37.032304336s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.64s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-561755 node delete m03: (1.782822475s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.29s)
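For context on the node-readiness check used by the test above: the go-template passed to kubectl get nodes walks each node's status.conditions and prints the status of the condition whose type is "Ready". Stripped of the extra shell quoting the test harness adds, the same query can be run directly, assuming kubectl is on PATH and the kubeconfig points at the cluster under test.

	# Prints the Ready condition status (True/False/Unknown), one line per node.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'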

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (206.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-561755 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0916 14:20:50.206945  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-561755 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.557906547s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-561755 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (206.08s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (46.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-561755
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-561755-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-561755-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.494257ms)

                                                
                                                
-- stdout --
	* [multinode-561755-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-561755-m02' is duplicated with machine name 'multinode-561755-m02' in profile 'multinode-561755'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-561755-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-561755-m03 --driver=kvm2  --container-runtime=crio: (44.768076651s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-561755
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-561755: exit status 80 (207.787113ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-561755 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-561755-m03 already exists in multinode-561755-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-561755-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.05s)

                                                
                                    
x
+
TestScheduledStopUnix (115.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-375641 --memory=2048 --driver=kvm2  --container-runtime=crio
E0916 14:25:50.206991  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-375641 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.159537732s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375641 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-375641 -n scheduled-stop-375641
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375641 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375641 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375641 -n scheduled-stop-375641
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-375641
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375641 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-375641
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-375641: exit status 7 (64.702761ms)

                                                
                                                
-- stdout --
	scheduled-stop-375641
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375641 -n scheduled-stop-375641
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375641 -n scheduled-stop-375641: exit status 7 (64.194291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-375641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-375641
--- PASS: TestScheduledStopUnix (115.69s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (184.02s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3431671921 start -p running-upgrade-833381 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3431671921 start -p running-upgrade-833381 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m36.016483225s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-833381 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-833381 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.542049369s)
helpers_test.go:175: Cleaning up "running-upgrade-833381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-833381
--- PASS: TestRunningBinaryUpgrade (184.02s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (164.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.620510946 start -p stopped-upgrade-692314 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.620510946 start -p stopped-upgrade-692314 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m59.225171231s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.620510946 -p stopped-upgrade-692314 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.620510946 -p stopped-upgrade-692314 stop: (2.141726789s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-692314 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-692314 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.556617302s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (164.92s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-692314
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772968 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-772968 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (59.547846ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-772968] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19652-713072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19652-713072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (74.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772968 --driver=kvm2  --container-runtime=crio
E0916 14:30:50.206688  720544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19652-713072/.minikube/profiles/functional-983900/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772968 --driver=kvm2  --container-runtime=crio: (1m14.28906942s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-772968 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.54s)

                                                
                                    
x
+
TestPause/serial/Start (57.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-563108 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-563108 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (57.017455323s)
--- PASS: TestPause/serial/Start (57.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772968 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772968 --no-kubernetes --driver=kvm2  --container-runtime=crio: (16.988434947s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-772968 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-772968 status -o json: exit status 2 (261.178281ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-772968","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-772968
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (41.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772968 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772968 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.66237311s)
--- PASS: TestNoKubernetes/serial/Start (41.66s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (63.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-563108 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-563108 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.071528189s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (63.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-772968 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-772968 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.481384ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-772968
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-772968: (1.274985927s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (31.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772968 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772968 --driver=kvm2  --container-runtime=crio: (31.343377125s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (31.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-772968 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-772968 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.306824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestPause/serial/Pause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-563108 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-563108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-563108 --output=json --layout=cluster: exit status 2 (278.843207ms)

                                                
                                                
-- stdout --
	{"Name":"pause-563108","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-563108","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-563108 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.93s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-563108 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.08s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-563108 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-563108 --alsologtostderr -v=5: (1.080278642s)
--- PASS: TestPause/serial/DeletePaused (1.08s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                    

Test skip (32/213)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    